



AlwaysOn Availability Groups – FAQ



1. Which SQL Server Editions include AlwaysOn Availability Group functionality?

    SQL Server Enterprise Edition. (Starting with SQL Server 2016, Standard Edition also supports Basic Availability Groups, limited to a single database and two replicas.)


2. Do I need to use a specific Windows Server edition with AlwaysOn Availability Groups?


    The underlying Windows Server Failover Cluster requires Windows Server 2008 Enterprise Edition or above. (From Windows Server 2012 onward, Standard Edition also supports failover clustering.)



3. What are the prerequisites for AlwaysOn Availability Groups?

See the Microsoft documentation topic “Prerequisites, Restrictions, and Recommendations for AlwaysOn Availability Groups”


4. How many replicas can I have in an AlwaysOn Availability Group?

5 total – 1 primary and up to 4 secondaries in SQL Server 2012. SQL Server 2014 and later raise the limit to 9 total – 1 primary and up to 8 secondaries


5. How many databases can participate in an AlwaysOn Availability Group?

Up to 100 is the recommendation, but it’s not enforced


6. How many AlwaysOn Availability Groups can I have on an instance?

Up to 10 availability groups is the recommendation, but it’s not enforced


7.  What’s the difference between asynchronous and synchronous availability modes?


Asynchronous-commit mode is best for replicas in different data centers. The primary hardens a transaction to its own log and then sends confirmation to the client without waiting for any secondary, so transaction latency is low. Synchronous-commit mode ensures transactions are hardened on all synchronous secondaries before the primary commits, so transaction latency is increased.


8. How many synchronous secondary replicas can I have?


You can have up to 2 synchronous replicas, but you are not required to use any. You could run all secondaries in async mode if desired


9. Can I use a secondary for read-only queries?

Yes. An active secondary can be used to offload read-only queries from the primary to a secondary instance in the availability group



10. Can I use a secondary for backup?


Yes. An active secondary can be used for some types of backups


11. What types of backups are supported on active secondaries?


You can run Copy Only Full backups and regular (non-copy only) transaction log backups on active secondaries. Differential backups are not supported on active secondaries


12. Do I have to license my secondary replicas?


Maybe. You are allowed to use a single passive secondary replica at no additional SQL Server license cost. However, a second passive secondary must be licensed. All active secondaries (used for read-only queries or backup) must be licensed


13. Can I create additional indexes on read-only secondaries to improve query performance?


No. If you require additional indexing to improve performance on a read-only secondary, you must create the index on the primary replica, and you should weigh the cost of creating and maintaining it there


14.  Can I create additional statistics on read-only secondaries to improve query performance?


No. However, you can allow SQL Server to automatically create statistics on read-only secondaries. Auto-created statistics are placed in tempdb. A failover or restart removes that information
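As a sketch, you can check for these temporary statistics on a readable secondary with a query like the following (assumes SQL Server 2012 or later):

```sql
-- Run on the readable secondary: temporary statistics live in tempdb
-- and are flagged with is_temporary = 1 in sys.stats.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name
FROM sys.stats AS s
WHERE s.is_temporary = 1;
```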


15. What is the impact of running read-only workloads on active secondaries?


Read-only queries can take resources from the redo thread, which can slow down synchronization and add latency in synchronous availability mode.


16.   How do I limit redo thread impact of running read-only workloads on active secondaries?

Use Resource Governor to limit CPU for the read-only workload, and run DDL modifications during times of low activity
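As a sketch, a Resource Governor setup that caps CPU for a reporting workload might look like this; the pool, workload group, and application names are hypothetical:

```sql
-- Cap CPU for sessions classified into the read-only workload group
CREATE RESOURCE POOL ReadOnlyPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReadOnlyGroup USING ReadOnlyPool;
GO
-- The classifier function must be created in master and be schema-bound
CREATE FUNCTION dbo.fn_ReadOnlyClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN APP_NAME() = N'ReportingApp'
                THEN N'ReadOnlyGroup'
                ELSE N'default' END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_ReadOnlyClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```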


17. Can I make DDL changes to the primary in an AlwaysOn Availability Group?


Yes. DDL changes made on the primary are automatically propagated to the secondaries


18. Do AlwaysOn Availability Groups help with database corruption?


Yes. If a corrupt page is detected, SQL Server will attempt to repair the page by getting it from another replica


19. Can I manually fail over to a secondary replica?


Yes. If the secondary is in synchronous-commit mode and is set to “SYNCHRONIZED” you can manually fail over without data loss. If the secondary is not in a synchronized state then a manual failover is allowed but with possible data loss


20. Can SQL Server automatically fail over to a secondary?


Yes. If the primary and at least one secondary are set to AUTOMATIC failover and the secondary is synchronized, then SQL Server can automatically failover


21. Can I run DBCC CHECKDB on secondary replicas?


Yes. You can run DBCC CHECKDB on a secondary, but you should also run it on the primary and on any secondary being used for backups


22. Are AlwaysOn Availability Groups a replacement for Log Shipping?


Probably, with the following caveats. With AlwaysOn, log records are applied to secondaries immediately, so there is no delayed-apply capability. If you require a delay for DR (e.g. someone accidentally drops a table and you want some time to recover it from the delayed secondary, as a log shipping setup allows), you may want to consider continuing to use log shipping. Also, if you remove log shipping, you lose its log backups, which will now have to be scheduled separately


23. Can I use Transparent Data Encryption with AlwaysOn Availability Groups?


Yes, with caveats. A database encrypted with TDE cannot be added through the availability group wizards; you must restore the certificate and database encryption key on each secondary and join the database manually, following Microsoft's guidelines for adding a previously encrypted database to an AlwaysOn Availability Group


24. Do I have to run my backup jobs on all replicas that can participate in backups?


Yes. You must schedule your backups to run on every replica that you want to (or that can) participate in backups. You can use the system function sys.fn_hadr_backup_is_preferred_replica to determine whether the current replica is the preferred one for backup
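This allows the same backup job to be scheduled on every replica while only the preferred replica does the work; a sketch, with a hypothetical database name and backup path:

```sql
-- Scheduled identically on every replica; runs only where preferred
IF sys.fn_hadr_backup_is_preferred_replica(N'MyAgDatabase') = 1
BEGIN
    BACKUP LOG MyAgDatabase
    TO DISK = N'\\backupshare\MyAgDatabase.trn';
END
```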


25. If I run a transaction log backup on an active secondary, what happens to the transaction logs on other replicas?


A single log chain is maintained across all replicas: a log backup taken on one replica notifies the others, allowing the transaction log to be truncated on every replica


26. Do AlwaysOn Availability Groups require a Witness server like mirroring?


No. AlwaysOn Availability Groups do not require a witness SQL Server. The underlying Windows cluster can (optionally) use a witness File Share as a voting member.


27.  Can I disallow read-only connections on the primary?


Yes. To enforce this, specify read-only intent (ApplicationIntent=ReadOnly) in the connection string and add only secondaries (not the primary) to the read-only routing list. If you also want to refuse direct read-only connections to the primary, set the primary's ALLOW_CONNECTIONS option to READ_WRITE.
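A sketch of that configuration, using hypothetical availability group, replica, and host names:

```sql
-- Refuse read-only-intent connections on the primary
ALTER AVAILABILITY GROUP MyAg
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE));

-- Accept read-only connections on the secondary and give it a routing URL
ALTER AVAILABILITY GROUP MyAg
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
      READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.contoso.com:1433'));

-- Route read-only intent to the secondary only (not the primary)
ALTER AVAILABILITY GROUP MyAg
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2')));
```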


28. When failover occurs, does the application need to do anything special for an existing SQL server connection that was established via the AlwaysOn Availability Group listener or will the connection automatically reestablish and continue to work?


When a failover occurs, the application’s connection is broken. The application needs to have connection retry logic to reestablish connectivity.


29. Can my failover instances be located on different subnets and, if so, do I need to do anything different with the application connection string?


Yes, instances can be located on different subnets. Either way, to make failover as fast as possible, use MultiSubnetFailover=True in connection strings so the client attempts connections in parallel.
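For example, a client connection string against a hypothetical listener might look like:

```
Server=tcp:MyAgListener,1433;Database=MyAgDatabase;Integrated Security=SSPI;MultiSubnetFailover=True;
```

Add ApplicationIntent=ReadOnly when the connection should be routed to a readable secondary.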


30. For synchronous-commit mode, what is the maximum latency allowed before the secondary loses its synchronized status, and is this latency configurable?


The primary will wait 10 seconds without receiving an ACK before it considers the session timed out. This is configurable via the SESSION_TIMEOUT replica option
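For example, the timeout could be raised on a hypothetical replica like so:

```sql
-- Raise the session timeout from the default 10 seconds to 15
ALTER AVAILABILITY GROUP MyAg
MODIFY REPLICA ON N'SQLNODE2' WITH (SESSION_TIMEOUT = 15);
```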



31. If a DBA grows a data file manually on the primary, will SQL Server automatically grow the same file on secondaries?


Yes. File operations are replicated automatically






  1. What is log shipping? Can we do log shipping with SQL Server 7.0? – Log shipping became a built-in feature in SQL Server 2000 Enterprise Edition (in 7.0 it could only be done with custom scripts). We need two SQL Server Enterprise Edition instances, and log shipping is configured from Enterprise Manager. With log shipping, transaction log backups from one server are automatically restored to a standby database on another server. If the primary server fails, the other server has a near-current copy of the database, so we can use this as the DR (disaster recovery) plan.
  2. Let us say the SQL Server crashed and you are rebuilding the databases, including the master database. What procedure do you follow? – To restore the master database, we first stop SQL Server and then start it from the command line with sqlservr.exe -m, which brings it up in single-user mode, after which we can restore master.
  3. Let us say the master db itself has no backup. Now you have to rebuild the db, so what kind of action do you take? – Rebuild the system databases: in SQL Server 2000 use the rebuildm.exe utility; in SQL Server 2005 and later run Setup with the REBUILDDATABASE action, then re-apply server configuration and re-attach or restore the user databases.
  4. What is BCP? When do we use it? – The Bulk Copy Program (bcp) is a command-line tool used to copy large amounts of data in and out of tables and views. It copies only data, not the structure of the objects.
  5. What should we do to copy the tables, schema and views from one SQL Server to another? – Script out the objects (generate SQL scripts), or build DTS packages to transfer them.
  6. What are the different types of joins and what does each do?
  7. What are the four main query statements?
  8. What is a sub-query? When would you use one?
  9. What is a NOLOCK?
  10. What are three SQL keywords used to change or set someone’s permissions?
  11. What is the difference between HAVING clause and the WHERE clause?
  12. What is referential integrity? What are the advantages of it?
  13. What is database normalization?
  14. Which command using Query Analyzer will give you the version of SQL server and operating system?
  15. Using query analyzer, name 3 ways you can get an accurate count of the number of records in a table?
  16. What is the purpose of using COLLATE in a query?
  17. What is a trigger?
  18. What is one of the first things you would do to increase performance of a query? For example, a boss tells you that “a query that ran yesterday took 30 seconds, but today it takes 6 minutes”
  19. What is an execution plan? When would you use it? How would you view the execution plan?
  20. What is the STUFF function and how does it differ from the REPLACE function?
  21. What does it mean to have quoted_identifier on? What are the implications of having it off?
  22. What are the different types of replication? How are they used?
  23. What is the difference between a local and a global variable?
  24. What is the difference between a Local temporary table and a Global temporary table? How is each one used?
  25. What are cursors? Name four types of cursors and when each one would be applied?
  26. What is the purpose of UPDATE STATISTICS?
  27. How do you use DBCC statements to monitor various aspects of a SQL server installation?
  28. How do you load large data to the SQL server database?
  29. How do you check the performance of a query and how do you optimize it?
  30. How are SQL Server 2000 and XML linked? Can XML be used to access data?
  31. What is SQL server agent?
  32. What is referential integrity and how is it achieved?
  33. What is indexing?
  34. What is normalization and what are the different forms of normalizations?
  35. Difference between server.transfer and server.execute method?
  36. What is de-normalization and when do you do it?
  37. What is better – 2nd Normal form or 3rd normal form? Why?
  38. Can we rewrite subqueries into simple select statements or with joins? Example?
  39. What is a function? Give some example?
  40. What is a stored procedure?
  41. Difference between Function and Procedure-in general?
  42. Difference between Function and Stored Procedure?
  43. Can a stored procedure call another stored procedure? If yes, to what nesting level, and can it be controlled?
  44. Can a stored procedure call itself (recursively)? If yes, to what nesting level, and can it be controlled?
  45. How do you find the number of rows in a table?
  46. Difference between Cluster and Non-cluster index?
  47. What is a table called if it has neither a clustered nor a non-clustered index?
  48. Explain DBMS, RDBMS?
  49. Explain basic SQL queries with SELECT from where Order By, Group By-Having?
  50. Explain the basic concepts of SQL server architecture?
  51. Explain a couple of features of SQL Server (scalability, availability, integration with the Internet, etc.)?
  53. Explain fundamentals of Data ware housing & OLAP?
  54. Explain the new features of SQL server 2000?
  55. How do we upgrade from SQL Server 6.5 to 7.0 and 7.0 to 2000?
  56. What is data integrity? Explain constraints?
  57. Explain some DBCC commands?
  58. Explain sp_configure commands, set commands?
  59. Explain what are db_options used for?
  60. What is the basic functions for master, msdb, tempdb databases?
  61. What is a job?
  62. What are tasks?
  63. What are primary keys and foreign keys?
  64. How would you update the rows that are divisible by 10, given a set of numbers in a column?
  65. If a stored procedure takes a table data type as a parameter, what does it look like?
  66. How are many-to-many relationships implemented?
  67. How do you know which index a table is using?
  68. How will you test a stored procedure taking two parameters, namely first name and last name, returning full name?
  69. How do you find the error? How can you know the number of rows affected by the last SQL statement?
  70. How can you get @@error and @@rowcount at the same time?
  71. What are sub-queries? Give example? In which case sub-queries are not feasible?
  72. What are the type of joins? When do we use Outer and Self joins?
  73. Which virtual table does a trigger use?
  74. How do you measure the performance of a stored procedure?
  75. Questions regarding RAISERROR?
  76. Questions on identity?
  77. If there is a failure during the update of certain rows, what will be the state?
  78. How many servers you were supporting?
  79. How were you maintaining the servers?
  80. What are your daily activities?
  81. How were you scheduling the jobs, through agent (or) any 3rd party tool?
  82. What is DR strategy?
  83. Users are complaining that databases are responding very slowly! How will you troubleshoot the issue?
  84. What are the different types of Replication?
  85. What types of maintenance jobs were there in your previous project?
  86. What will update statistics do? What is its significance?
  87. What are the few DBCC commands you use frequently?
  88. How will you check fragmentation on table or index?
  89. What is BCP and how is it useful?
  90. What are DTS packages and have you designed any such packages?
  91. What does SQL server Upgrade Advisor tool do?
  92. Why is there a possibility for only one clustered index?
  93. How will you troubleshoot a poor performing query?
  94. What are all the filters you will be seeing in SQL Server Profiler?
  95. What are the different types of Backup?
  96. What is the difference between FULL and Differential Backup?
  97. What is your backup strategy?
  98. What are the DBCC DBREINDEX and DBCC INDEXDEFRAG commands used for?
  99. How will you find the log space usage of all databases?
  100. How much memory does SQL Server require?
  101. How will you monitor log shipping?
  102. How will you set up Log Shipping? Explain in detail.
  103. What are AWE configurations?
  104. What are all the jobs associated with Log Shipping?
  105. What are reorganize index and rebuild index, and in what situations can you use each?
  106. What is a checkpoint?
  107. If tempdb is full, what can you do?
  108. Can you move tempdb from one drive to another while the business is running?
  109. If the master database is corrupt, what can you do?
  110. How many files can a database have? If the .ldf file is deleted (and no backup is available), what can you do? And if the .mdf file is deleted, what can you do?
  111. What is the usage of an .ndf file in a SQL Server database?
  112. If blocking occurs in SQL Server, how do you identify it and how do you resolve it?
  113. If a deadlock occurs in SQL Server, how do you identify it and how do you resolve it?
  114. In how many ways can we move databases? Which is the best one?
  115. How will you move databases (system and user) to a different instance?
  116. How to move a particular table in a SQL Server instance?
  117. An application is slow, how will you troubleshoot from db perspective?
  118. A user was able to connect yesterday but is unable to connect today. What will you do?
  119. Difference between Logshipping and mirroring?
  120. Ticketing tool + CR tool (are they the same or different?)
  121. What are isolation levels in sql server 2000?
  122. Difference between read committed and serializable in SQL Server 2000?
  123. How to give select and update permission in SQL Server?
  124. Tell me the new features in SQL Server 2005 from a DBA perspective?
  125. Difference between truncate and shrink?
  126. Which monitoring tool do you use? Name three main things you check with it.
  127. If an error occurs in SQL Server, how to identify whether it is an application error or a database error?
  128. What is view and usage of a view?
  129. What is execution plan in sql server?
  130. What is the usage of a nonclustered index in SQL Server?
  131. If the primary server fails, how to bring up the secondary server?
  132. In log shipping, if the transaction log is deleted on the primary server, how to restore on the secondary server?
  133. What types of issues have you resolved in SQL Server recently?
  134. What types of maintenance plans are there in SQL Server?
  135. How to identify if a job has failed?
  136. How to configure error logs?
  137. What is statistics in sql server?
  138. What are statistics, under what circumstances they go out of date, how do you update them?
  139. What is stored procedure and trigger?
  140. How long did the update statistics job take to run on the largest database (of size 1300 GB)?
  141. If you found a block on the server while update statistics was running, what would you do?
  142. How were you maintaining OLTP applications having terabytes of data?
  143. Update statistics ran successfully on Sunday; on Wednesday a large volume of data was inserted. What happens to the statistics?
  144. What will you do in case of poor performance of query?
  145. What are the prerequisites to migrate from 2000 to 2005?
  146. How many types of modes are there in database mirroring?
  147. What is database snapshot? How do you do that?
  148. How will you kill a process?
  149. How do you find out which process is getting blocked?
  150. How many clustered and nonclustered indexes can a table have?
  151. What is significance of update statistics command?
  152. What is blocking and how is it different from a deadlock?
  153. How do you transition new databases from the development team to production support?
  154. How does the identity property play a role in the case of replication?
  155. Authentication Modes? Explain Mixed Mode?
  156. If a database crashes, what will you do immediately?
  157. What will you do after installing a new SQL Server 2005 instance?
  158. How do you configure and manage SQL Server?
  159. Difference between role and privilege?
  160. A database has 10 tables and a user should have access to only one table; how do you set that up?
  161. The log shipping primary crashes; how will you bring up the secondary as the primary?
  162. How frequently will you use Profiler?
  163. How to handle issues during installation upgrade? How will you handle those errors?
  164. Is .NET Framework 2.0 the minimum required for SQL Server?
  165. What is the latency period of log shipping?
  166. Copy wizard in SQL Server 2005?
  167. How many instances do you handle?
  168. Tempdb space is increasing very rapidly; what will you do?
  169. If you use the Truncate Only option you will lose current transactions; how will you prevent that?
  262. How many types of mirroring are there?
  263. Difference between transactional replication and merge replication?
  264. If your database primary file size is 40 GB and the log file size is 60 GB, how much space do you need to take a backup of, or to restore, that database?
  265. How do you manage the memory of your database?
  266. Is load balancing supported by SQL Server 2005?
  267. What is the syntax for a transaction log backup?
  268. How do you check the space used by the log file?
  269. Difference between SQL Server 2000 and SQL Server 2005?
  270. Which command is used to find the blocking process ID?
  271. If replication fails because of a network issue, what do we need to configure so that merge replication or log shipping restarts automatically?
  272. If the mirror database is full, how can we shrink the file?
  273. If DBCC SHRINKFILE does not shrink the log, how do you troubleshoot the issue?
  274. How do you go about expanding your knowledge of SQL Server?
  275. Do you know when and where the local SQL Server User Group meets?
  276. Tell me about your experience with SQL Server – when did you start, what things have you done?
  277. What are some of the new features in SQL Server 2008? 2012?
  278. In your experience, what are the causes of poor performance in SQL Server?
  279. In your experience, what are the causes of deadlocks?
  280. How can you get a deadlock graph from SQL Server?
  281. What is a “wait”?
  282. Why are cursors generally considered a bad idea to use in SQL Server?
  283. What is a “Tally Table”?
  284. What is a “read-ahead read”?
  285. What are the different types of backups that can be performed?
  286. How does the “WITH COPY_ONLY” option affect the different types of backups?
  287. When restoring a database, what do the REDO and UNDO portions of the process do?
  288. What is a VLF?
  289. What conditions must be met to mark a VLF as inactive?
  290. Under what conditions can dirty data pages be written to the data file?
  291. Can dirty pages with an ongoing active transaction be written to the data file?
  292. What causes a transaction log to be cleared?
  293. What is a CHECKPOINT?
  294. How does it perform differently based upon the recovery model that the database is in?
  295. Does a checkpoint write to the transaction log?
  296. What is “Log Space Reservation”?
  297. How can you break a transaction log chain?
  298. What are some examples of minimally logged operations?
  299. What recovery model does the database need to be in to perform minimal logging?
  300. What ramifications are there when performing a minimally logged operation?
  301. Is there anything different about the next transaction log backup? If so, what?
  302. Can you restore to a point-in-time with this transaction log backup?
  303. What operations in SQL Server are NOT logged?
  304. Table Variables?
  306. What recovery model does the database need to be in for the TRUNCATE TABLE statement to be minimally logged?
  307. What are the Pros and Cons of executing the command “BACKUP LOG WITH TRUNCATE_ONLY”?
  308. What is a “Page Split”?
  309. What are the Pros and Cons of shrinking database files?
  310. How does fragmentation affect performance?
  311. How does fragmentation affect the size of the IO being read?
  312. How are UNIQUE and PRIMARY KEY constraints enforced in SQL Server?
  313. What is a heap?
  314. What are the different types of indexes (beyond clustered/non-clustered)?
  315. What attributes constitute a good clustering key?
  316. Is the data in a clustered index actually stored on disk in strict physical order?
  317. How does the choice of your clustered index key affect your non-clustered indexes?
  318. What kind of indexes can you have on a table variable?
  319. How does the use of a table variable affect the execution plan generated for a query?
  320. How many indexes can you have on a table? Table Variable?
  321. Should the Primary Key ALWAYS be a clustered index? Why or why not?
  322. What is a “uniqueifier”?
  323. What is the difference between an “Active/Active” and an “Active/Passive” cluster?
  324. What is a “server-side trace”?
  325. How do you create a server-side trace?
  326. How is a server-side trace different from a trace created by Profiler?
  327. What is “Instant File Initialization”?
  328. How does using this help SQL Server?
  329. What database files can this be used on?
  330. What are the windowing functions in SQL Server?
  331. How can you get an execution plan for a query on a server when you don’t want to run that query on that server (i.e. a production server)?
  332.  What maintenance operations do you run on databases, and how frequently?
  333. In an execution plan, you can see iterators with “rebind” and “rewind”. What’s the difference between these?
  334. Explain a Few of the New Features of SQL Server 2008 Management Studio
  335. Explain IntelliSense for Query Editing
  336. Explain MultiServer Query
  337. Explain Query Editor Regions
  338. Explain Object Explorer Enhancements
  339. Explain Activity Monitors
  340. What is Service Broker?
  341. Where are SQL server Usernames and Passwords Stored in the SQL server?
  342. What is Policy Management?
  343. What is Database Mirroring?
  344. What are Sparse Columns?
  345. What does TOP Operator Do?
  346. What is CTE?
  347. What is MERGE Statement?
  348. What is Filtered Index?
  349. Which are the New Data Types Introduced in SQL SERVER 2008?
  350. What are the Advantages of Using CTE?
  351. How can we Rewrite Sub-Queries into Simple Select Statements or with Joins?
  352. What is CLR?
  353. What are Synonyms?
  354. What is LINQ?
  355. What are Isolation Levels?
  356. What is Use of EXCEPT Clause?
  357. What is XPath?
  358. What is NOLOCK?
  359. What is the Difference between Update Lock and Exclusive Lock?
  360. How will you Handle Error in SQL SERVER 2008?
  361. What is RAISERROR?
  362. How to Rebuild the Master Database?
  363. What is the XML Datatype?
  364. What is Data Compression?
  365. What is Use of DBCC Commands?
  366. How to Copy the Tables, Schema and Views from one SQL Server to Another?
  367. How to Find Tables without Indexes?
  368. How to Copy Data from One Table to Another Table?
  369. What are Catalog Views?
  370. What is PIVOT and UNPIVOT?
  371. What is a Filestream?
  372. What is SQLCMD?
  373. What do you mean by TABLESAMPLE?
  374. What is ROW_NUMBER()?
  375. What are Ranking Functions?
  376. What is Change Data Capture (CDC) in SQL Server 2008?
  377. How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table?
  378. What is the CPU Pressure?
  379. How can I Get Data from a Database on Another Server?
  380. What is the Bookmark Lookup and RID Lookup?
  381. What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE?
  382. What is Difference between GETDATE and SYSDATETIME in SQL Server 2008?
  383. How can I Check whether Automatic Statistics Update is Enabled?
  384. How to Find Index Size for Each Index on Table?
  385. What is the Difference between Seek Predicate and Predicate?
  386. What are Basics of Policy Management?
  387. What are the Advantages of Policy Management?
  388. What are Policy Management Terms?
  389. What is the ‘FILLFACTOR’?
  390. Where in MS SQL Server is ’100’ equal to ‘0’?
  391. What are Points to Remember while Using the FILLFACTOR Argument?
  392. What is a ROLLUP Clause?
  393. What are Various Limitations of the Views?
  394. What is a Covered index?
  395. When I Delete any Data from a Table, does the SQL Server reduce the size of that table?
  396. What are Wait Types?
  397. How to Stop Log File Growing too Big?
  398. If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor?
  399. What is Data Warehousing?
  400. What is Business Intelligence (BI)?
  401. What is a Dimension Table?
  402. What is Dimensional Modeling?
  403. What is a Fact Table?
  404. What are the Fundamental Stages of Data Warehousing?
  405. What are the Different Methods of Loading Dimension tables?
  406. Describe the Foreign Key Columns in a Fact Table and a Dimension Table.
  407. What is Data Mining?
  408. What is the Difference between a View and a Materialized View?
  409. What is OLTP?
  410. What is OLAP?
  411. What is the Difference between OLTP and OLAP?
  412. What is ODS?
  413. What is ER Diagram?
  414. What is ETL?
  415. What is VLDB?
  416. Is an OLTP Database Design Optimal for a Data Warehouse?
  417. If denormalizing improves Data Warehouse Processes, then why is the Fact Table in Normal Form?
  418. What are Lookup Tables?
  419. What are Aggregate Tables?
  420. What is Real-Time Data-Warehousing?
  421. What are Conformed Dimensions?
  422. What is a Conformed Fact?
  423. How do you Load the Time Dimension?
  424. What is a Level of Granularity of a Fact Table?
  425. What are Non-Additive Facts?
  426. What is a Factless Fact Table?
  427. What are Slowly Changing Dimensions (SCD)?
  428. What is Hybrid Slowly Changing Dimension?
  429. What is BUS Schema?
  430. What is a Star Schema?
  431. What is a Snowflake Schema?
  432. Differences between the Star and Snowflake Schema?
  433. What is Difference between ER Modeling and Dimensional Modeling?
  434. What is Degenerate Dimension Table?
  435. Why is Data Modeling Important?
  436. What is a Surrogate Key?
  437. What is Junk Dimension?
  438. What is a Data Mart?
  439. What is the Difference between OLAP and Data Warehouse?
  440. What is a Cube and Linked Cube with Reference to Data Warehouse?
  441. What is Snapshot with Reference to Data Warehouse?
  442. What is Active Data Warehousing?
  443. What is the Difference between Data Warehousing and Business Intelligence?
  444. What is MDS?
  445. What is the best way to update data between SQL Servers?
  446. What are the scenarios you will need multiple databases with schema?
  447. How will you plan your replication?
  448. What are publisher, distributor and subscriber in “Replication”?
  449. What is “Push” and “Pull” subscription?
  450. Can a publication support push and pull at one time?
  451. What are different models/types of replication?
  452. What is Snapshot replication?
  453. What are the advantages and disadvantages of using Snapshot replication?
  454. What type of data will qualify for “Snapshot replication”?
  455. What is the actual location where the distributor runs?
  456. Can you explain in detail how exactly “Snapshot Replication” works?
  457. What is merge replication?
  458. How does merge replication work?
  459. What are advantages and disadvantages of Merge replication?
  460. What is conflict resolution in Merge replication?
  461. What is a transactional replication?
  462. Can you explain in detail how transactional replication works?
  463. What are data type concerns during replications?
  464. Whenever a client/user sends a request to SQL Server, what exactly happens internally?
  465. Which components handle the request, and how is the response returned to the user?
  466. How do you migrate/upgrade from SQL Server 2000 to SQL Server 2005? What tool do you use before upgrading SQL Server?
  467. How do you move the SQL Server 2000 master database to a SQL Server 2005 server?
  468. What is the difference between ‘NoRecovery’ and ‘Standby’ in Log Shipping?
  469. Whenever a job fails, where do you see the job information in SQL Server?
  470. What are the general concepts of High Availability?
  471. How exactly does Log Shipping work?
  472. In the Log Shipping process, the primary server’s primary database crashed after taking a backup; how do you bring the secondary server online without any data loss?
  473. What else is to be done after performing a manual failover in Log Shipping? [Hint: logins and DB users have to be mapped]
  474. Are SQL Server roles copied to the secondary server when a manual failover is done in Log Shipping?
  475. What is Merge Replication? How do you solve merge conflicts?
  476. What is Peer-to-Peer Replication? What are its advantages? [Hint: Load Balancing]


How are transaction logs copied from the principal server to the mirror server in Database Mirroring?

  1. What happens when you create a Primary Key on a column of a table?
  2. Can we create a clustered index without creating a Primary key on a column of a table?
  3. What exactly is a clustered index? What is non-clustered index?
  4. How many clustered indexes can be created on a single table?
  5. How many non-clustered indexes can be created on a single table in sql server 2005?
  6. How many non-clustered indexes can be created on a single table in sql server 2008?
  7. How is data stored internally in Clustered Indexes? [in B-Tree format]
  8. What do you do when tempdb is full?


  1. When will you rebuild/reorganize indexes? What criteria should we follow?


1) How to resolve allocation and consistency errors?


Run DBCC CHECKDB against the database:

DBCC CHECKDB ('DBName')

If errors are found, the output reports the minimum repair level, for example: “repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (SomeDB).”
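The same check is often run with a couple of options; a minimal sketch (the database name is a placeholder):

```sql
-- Suppress informational messages and report every error
-- (rather than the default cap of 200 per object).
DBCC CHECKDB ('SomeDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```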


2) How to recover a suspect database without data loss?


EXEC sp_resetstatus [Your_Database];


DBCC checkdb ([Your_Database])


Note: Instead of the query above, you can run DBCC CHECKDB ([Your_Database]) with one of the repair options below.


REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD


Specifies that DBCC CHECKDB repair the found errors. The specified database must be in single-user mode to use one of the following repair options.


REPAIR_ALLOW_DATA_LOSS
Tries to repair all reported errors. These repairs can cause some data loss.


REPAIR_FAST
Maintains syntax for backward compatibility only. No repair actions are performed.


REPAIR_REBUILD
Performs repairs that have no possibility of data loss. This can include quick repairs, such as repairing missing rows in non-clustered indexes, and more time-consuming repairs, such as rebuilding an index.
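Putting the steps above together, a repair attempt might be sketched as follows (Your_Database is a placeholder; take a full backup first, since REPAIR_ALLOW_DATA_LOSS can discard data):

```sql
-- Repair options require single-user mode
ALTER DATABASE [Your_Database] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Try the no-data-loss repair first
DBCC CHECKDB ('Your_Database', REPAIR_REBUILD);

-- Last resort only, after a backup:
-- DBCC CHECKDB ('Your_Database', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE [Your_Database] SET MULTI_USER;
```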


3) SQL Server service does not start successfully because of a logon failure


This problem occurs because the password for the SQL Server startup service account or the SQL Server Agent startup service account is not correct. This problem can occur when the password for the account is changed but the password information was not updated for the SQL Server service or the SQL Server Agent service.

     To solve this problem, type the correct password in the Microsoft SQL Server service account on the SQL Server host computer.

To correct the password in Microsoft Windows 2000 Server and Microsoft Windows XP:

  1. Click Start, point to Settings, and then click Control Panel.
  2. Double-click Administrative Tools, and then double-click Services.
  3. Use one of the following steps based on your instance type:

o For a default instance of SQL Server, double-click MSSQLServer, and then click the Log On tab.

o For a named instance of SQL Server, double-click MSSQL$YourInstanceName, and then click the Log On tab. Replace YourInstanceName with the actual name of the SQL Server instance.

o For a default instance of SQL Server Agent, double-click SQLAgent, and then click the Log On tab.

o For a named instance of SQL Server Agent, double-click SQLAgent$YourInstanceName, and then click the Log On tab. Replace YourInstanceName with the actual name of the SQL Server instance.

  4. Type the correct password in the Password and Confirm password textbox, and then click OK.

To correct the password in Microsoft Windows NT 4.0:

  1. Click Start, point to Settings, and then click Control Panel.
  2. Use one of the following steps based on your instance type:

o For a default instance of SQL Server, double-click MSSQLServer, and then click the Log On tab.

o For a named instance of SQL Server, double-click MSSQL$YourInstanceName, and then click the Log On tab. Replace YourInstanceName with the actual name of the SQL Server instance.

o For a default instance of SQL Server Agent, double-click SQLAgent, and then click the Log On tab.

o For a named instance of SQL Server Agent, double-click SQLAgent$YourInstanceName, and then click the Log On tab. Replace YourInstanceName with the actual name of the SQL Server instance.

  3. Type the correct password in the Password and Confirm password textbox, and then click OK.

NOTE: Changing the password through the Services applet allows SQL Server to start successfully; however, if Microsoft Search (Full-Text Service) is installed, it requires a subsequent password change through SQL Enterprise Manager (SEM).

Note If you are using SQL Server 2005, see the “Changing passwords and user accounts” topic in SQL Server 2005 Books Online.

The following section is from SQL Server 2000 Books Online:

“After changing the SQL Server service account information in Control Panel, you must also change the SQL Server service account in SQL Server Enterprise Manager. This allows the service account information for Microsoft Search service to remain synchronized as well.

Important: Although the Microsoft Search service is always assigned to the local system account, the full-text search engine tracks the SQL Server service account in Windows. Full-text search and failover clustering are not available if Windows password changes are not reset using SQL Server Enterprise Manager.”

Using SEM, right-click the Server, click Properties, and then click the Security tab. Under Startup service account, enter the appropriate account and password.


4. The SQL Server Agent service does not start. What is the possible reason, and how do you fix it?


  1. Click Start, click Programs, and then click SQL Server 2008.
  2. Click Configuration Tools, and then click SQL Server Configuration Manager.
  3. Expand SQL Server 2008 Services.
  4. Locate the SQL Server Agent service. The SQL Server Agent service is named “SQL Server Agent” for default instances and is named “SQL Server Agent (Instance name)” for named instances.
  5. Right-click SQL Server Agent, and then click Properties.
  6. On the Log On tab, select This account, and then specify a valid account name and password.
  7. In the Service status section, click Start, and then click OK.


5. What are statistics? Where are they stored? How do you find them?


Statistics exist on every index; they can also exist on columns that are not indexed.


UPDATE STATISTICS Sales.SalesOrderDetail;

UPDATE STATISTICS Sales.SalesOrderDetail AK_SalesOrderDetail_rowguid;
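To see which statistics objects exist on a table and inspect one of them, a sketch reusing the same AdventureWorks names as the example above:

```sql
-- List the statistics objects on a table
SELECT name, auto_created
FROM sys.stats
WHERE object_id = OBJECT_ID('Sales.SalesOrderDetail');

-- Show the header, density, and histogram for one statistic
DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', AK_SalesOrderDetail_rowguid);
```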




6. How to assign specific processors to SQL Server instances


Processor affinity should be tested carefully. If you have fast machines with lots of cores, your bottleneck will probably NOT be your processors but your I/O subsystem (i.e., disks). However, mileage may vary. I’ve seen a LOT more systems bog down because of disk and memory than CPU, but it depends on your app, so it is hard to tell. Also keep in mind that you have to set processor affinity on each SQL instance while keeping in mind ALL instances on ALL other boxes: on box 1 you might allocate cores 1-4 to instance 1 and cores 5-6 to instance 2; then when you go to box 2 you would probably start at core 7, and so on.

Short answer: I wouldn’t do it. I would need a good reason to do it LATER. Be careful how many instances you have running, let them use the resources they have, and monitor for problems. If you have a problem instance, you might just limit that one.
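If you do need to bind an instance to specific cores, a sketch using the SQL Server 2008-era 'affinity mask' option (the value is a CPU bitmask; 15 = cores 0-3 is an assumption for illustration):

```sql
-- Enable advanced options, then bind this instance to CPUs 0-3
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 15;  -- binary 1111 = CPUs 0,1,2,3
RECONFIGURE;
```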


7. How to check which port SQL Server is listening on


Identify SQL Server TCP IP port being used

USE master
GO

xp_readerrorlog 0, 1, N'Server is listening on', 'any', NULL, NULL, N'asc'

-- or:

sp_readerrorlog 1, 1, 'listening', 'server'
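An alternative that avoids reading the error log is to query the connections DMV from a TCP session (returns NULL over shared memory or named pipes):

```sql
-- Port the current connection arrived on
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```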


8. What is a table scan?


9.  Index Seek Vs. Index Scan (Table Scan)


Index Scan retrieves all the rows from the table. Index Seek retrieves selective rows from the table.

Index Scan:

Since a scan touches every row in the table whether or not it qualifies, the cost is proportional to the total number of rows in the table. Thus, a scan is an efficient strategy if the table is small or if most of the rows qualify for the predicate.

Index Seek:

Since a seek only touches rows that qualify and pages that contain these qualifying rows, the cost is proportional to the number of qualifying rows and pages rather than to the total number of rows in the table.

An Index Scan is nothing but scanning the data pages from the first page to the last page. If there is an index on a table and the query is touching a large amount of data (retrieving, say, more than 50 or 90 percent of the rows), the optimizer will just scan all the data pages to retrieve the rows. If there is no index, then you will see a Table Scan in the execution plan.

Index seeks are generally preferred for highly selective queries, meaning the query requests only a few rows, typically around 10 percent (some documents say 15 percent) of the rows in the table.

In general, the query optimizer tries to use an Index Seek, which means the optimizer has found a useful index to retrieve the record set. But if it is not able to do so, either because there is no index or no useful index on the table, then SQL Server has to scan all the records that satisfy the query condition.


10. What Happens During a Table Scan


Suppose you aren’t much of an organized person, and your cubicle at work isn’t really well organized.  You’ve got papers lying all around, some filed, some stacked, some in the filing cabinet, and some in folders. Generally speaking, it’s hard to find data like that; then suppose your boss asks you for all of your notes related to a specific project that you are working on.  Unless you work for a very large company with ample developers and enough resources to have you dedicate yourself to one project at a time, you probably have multiple projects going on, and notes on all of them are mixed around the cubicle.

So how do you find everything?

Answer: you look through every stack, folder, drawer, and box you’ve got, and then hope against hope you have it all.  Well, this is essentially a table scan: your cubicle is a table, and you, my friend, are the query processing engine.

When a query is submitted to SQL Server, it attempts to determine the best way to execute it, generating what is called a “Query Execution Plan” that describes how it will go about fulfilling the request (find every note related to the project and return it to the boss).  The reason table scans are typically considered a Very Bad Thing® is that when SQL Server has to search through all of the data in a table to satisfy a query, it consumes more resources than it really needs to.


11.   How to Avoid Table Scans


If you are an organized person, you keep all of your project notes neatly organized, filed away, and close to each other.  SQL Server has a way of doing this as well: it’s called an index. An index is a way of keeping track of what data you have and where it resides, in order to enable fast retrieval when it is requested.

Now, building an index in SQL Server is easy; however, knowing when and where to build them is a bit more of an art.  SQL Server performance tuning is absolutely dependent on indexes, and being knowledgeable about how to write queries that take advantage of them is essential.  Generally speaking, if a column in a table is referenced frequently in queries, it should have an index on it; the more read-intensive your database is, the more this holds true.  It is not usually a good idea to have indexes on every column in a table, for multiple reasons, chief among them space considerations: under the covers, indexes are copies of the data from specific columns of a table, extracted and organized for fast searching, so an index takes up space roughly equivalent to the volume of data being indexed.

Another consideration to take into account when building indexes is whether your data is added to or updated very frequently; the more often data in a table changes, the greater the impact indexes will have on performance.  Every time indexed data changes, SQL Server has to ensure that the index is kept up to date, which adds processing overhead that is often overlooked in database design and performance considerations.


12. What is the T-SQL command for viewing an execution plan?


When SET SHOWPLAN_ALL is ON, SQL Server returns execution information for each statement without executing it, and Transact-SQL statements are not executed. After this option is set ON, information about all subsequent Transact-SQL statements is returned until the option is set OFF. For example, if a CREATE TABLE statement is executed while SET SHOWPLAN_ALL is ON, SQL Server returns an error message from a subsequent SELECT statement involving that same table, informing users that the specified table does not exist. Therefore, subsequent references to this table fail. When SET SHOWPLAN_ALL is OFF, SQL Server executes the statements without generating a report.

SET SHOWPLAN_TEXT and SET SHOWPLAN_ALL cannot be specified inside a stored procedure; they must be the only statements in a batch.
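A minimal usage sketch (the SELECT is a placeholder query; while the option is ON it is compiled but not executed, and each SET must be in its own batch):

```sql
SET SHOWPLAN_ALL ON;
GO
-- Returns the estimated plan rows instead of executing the query
SELECT * FROM Sales.SalesOrderDetail WHERE SalesOrderID = 43659;
GO
SET SHOWPLAN_ALL OFF;
GO
```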


13. Use Resource Governor to Limit CPU Usage


Configuring Resource Governor to Limit CPU Usage

Ensure that Resource Governor is enabled

In this Resource Governor scenario, configuration comprises the following basic steps:

Create and configure a Resource Governor Resource pool that limits the maximum average CPU bandwidth that will be given to requests in the resource pool when CPU contention occurs.

Create and configure a Resource Governor Workload group that uses this pool.

Create a classifier function, which is a user-defined function (UDF) whose return values are used by Resource Governor for classifying sessions so that they are routed to the appropriate workload group.

Register the classifier function with Resource Governor.

Apply the changes to the Resource Governor in-memory configuration.


To configure Resource Governor for limiting CPU usage (Transact-SQL)

Issue a CREATE RESOURCE POOL statement to create a resource pool. The example for this procedure uses the following syntax:

CREATE RESOURCE POOL pool_name WITH (MAX_CPU_PERCENT = value);

Value is an integer from 1 to 100 that indicates the percentage of maximum average CPU bandwidth. The appropriate value depends on your environment. For the purpose of illustration, the example in this topic uses 20 percent (MAX_CPU_PERCENT = 20).

Issue a CREATE WORKLOAD GROUP statement to create a workload group for low-priority operations whose CPU usage you want to govern. The example for this procedure uses the following syntax:

CREATE WORKLOAD GROUP group_name USING pool_name;

Issue a CREATE FUNCTION statement to create a classifier function that maps the workload group created in the preceding step to the user of the low-priority login. The example for this procedure uses the following syntax:

CREATE FUNCTION [schema_name.]function_name () RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @workload_group_name AS sysname
    IF (SUSER_NAME() = 'user_of_low_priority_login')
        SET @workload_group_name = 'workload_group_name'
    RETURN @workload_group_name
END


For information about the components of this CREATE FUNCTION statement, see:

DECLARE @local_variable (Transact-SQL)

SET @local_variable (Transact-SQL)

Important: SUSER_NAME is just one of several system functions that can be used in a classifier function. For more information, see Create and Test a Classifier User-Defined Function.

Issue an ALTER RESOURCE GOVERNOR statement to register the classifier function with Resource Governor. The example for this procedure uses the following syntax:

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = schema_name.function_name);

Issue a second ALTER RESOURCE GOVERNOR statement to apply the changes to the Resource Governor in-memory configuration, as follows:

ALTER RESOURCE GOVERNOR RECONFIGURE;

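The whole scenario can be assembled into one sketch (pool, group, function, and login names are placeholders; the classifier function lives in master):

```sql
-- Pool capped at 20% average CPU under contention
CREATE RESOURCE POOL poolLowCPU WITH (MAX_CPU_PERCENT = 20);

-- Workload group routed into that pool
CREATE WORKLOAD GROUP wgLowCPU USING poolLowCPU;
GO
-- Classifier: sessions from the low-priority login go to wgLowCPU
CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @wg AS sysname = 'default';
    IF (SUSER_NAME() = 'low_priority_login')  -- placeholder login
        SET @wg = 'wgLowCPU';
    RETURN @wg;
END;
GO
-- Register the classifier and apply the configuration
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```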


14. TempDB is growing fast. What could be the reason?

Check what is causing TempDB to grow by running the following queries:

SELECT * FROM sys.dm_db_file_space_usage
SELECT * FROM sys.dm_db_session_space_usage
SELECT * FROM sys.dm_db_task_space_usage

In my experience, the most likely cause for something like this is an out-of-control cross join query (though there are lots of other possibilities). There are a couple of possible reasons for TempDB filling up. The most obvious is of course a process that fills up a large temp table. One way to investigate this would be:

SELECT * FROM tempdb..sysobjects (NOLOCK) WHERE xtype = 'U'

This tells you if there are any temp tables at all right now. You might be able to get an indication of sizes by looking in sysindexes:

SELECT OBJECT_NAME(id), reserved FROM tempdb..sysindexes (NOLOCK) WHERE indid IN (0, 1)

reserved gives you the size in 8 KB pages. But the numbers may lag behind and not be accurate. There are other sources than temp tables: sort operations, cursors, worktables for queries. These are more difficult to track down. You could shrink TempDB and trace for auto-grow events, and see if you can connect the events to a certain user or host.


15. Possible reasons for a TempDB full issue


SQL Server makes extensive use of TempDB when:

  • Tracking versions for row-versioning concurrency.
  • Performing bulk load operations on tables with triggers enabled.
  • Running DBCC CHECKDB.
  • Rebuilding an index with the SORT_IN_TEMPDB option.
  • Storing variables of LOB data types.
  • Storing intermediate query results, for example, during joins, aggregates, or sorts.
  • Storing Service Broker dialog information.
  • Caching temporary objects and tables.
  • Storing inserted and deleted tables in triggers.
  • Running sp_xml_preparedocument.

If your SQL Server Agent job is doing any of this, then that will be the cause of the TempDB growth. I suggest you read my article, Recovering from out-of-disk-space conditions for TempDB. This will hopefully help you resolve the issue.

Another potential problem that you need to tackle as a DBA is running out of disk space on the hard disk that contains tempdb. This is because SQL Server makes extensive use of tempdb, as the list above shows.

Viewing tempdb space usage

The following dynamic management views (DMVs) can be used to report information about tempdb space:

  • sys.dm_db_session_space_usage – Retrieves the number of pages allocated and deallocated by each session for the database.
  • sys.dm_db_task_space_usage – Retrieves the page allocation and deallocation activity by task for the database.
  • sys.dm_db_file_space_usage – Retrieves information about disk space used by any database.
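These DMVs can be combined into a quick space summary; a sketch (page counts are in 8 KB pages, converted to MB here):

```sql
-- Where tempdb space is going: user objects, internal objects,
-- version store, and free space, in MB
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```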


17. Why is TempDB full, and how can I prevent this from happening?


Note that the majority of this article describes symptoms and workarounds for any database that is larger than you think it should be; it is not applicable only to tempdb. 

SQL Server allocates a database called tempdb, primarily for worktable / #temp table usage. Sometimes, you will have one of the following symptoms: 

  • An error message in the event log: 

Event ID: 17052
Description: The log file for database ‘tempdb’ is full.
Back up the transaction log for the database to free up
some log space 

  • An error message in Query Analyzer:   

Server: Msg 8624, Level 16, State 1
Internal SQL Server error


Server: Msg 1101, Level 17, State 10, Line 1
Could not allocate new page for database ‘TEMPDB’. There are no more pages available in filegroup DEFAULT. Space can be created by dropping objects, adding additional files, or allowing file growth.

  • Or you will notice that the files are much bigger than they should be — by using EXEC sp_spaceused, looking at the taskpad view in Enterprise Manager, seeing the MDF/LDF files themselves within Windows Explorer, or being alerted by monitoring software like SiteScope or Quest Spotlight.




Usually, tempdb fills up when you are low on disk space, or when you have set an unreasonably low maximum size for database growth.

Many people think that tempdb is only used for #temp tables, when in fact you can easily fill up tempdb without ever creating a single temp table. Some other scenarios that can cause tempdb to fill up: 

  • any sorting that requires more memory than has been allocated to SQL Server will be forced to do its work in tempdb; 
  • if the sorting requires more space than you have allocated to tempdb, one of the above errors will occur;
  • DBCC CheckDB(‘any database’) will perform its work in tempdb — on larger databases, this can consume quite a bit of space;  
  • DBCC DBREINDEX or similar DBCC commands with ‘Sort in tempdb’ option set will also potentially fill up tempdb;
  • large resultsets involving unions, order by / group by, cartesian joins, outer joins, cursors, temp tables, table variables, and hashing can often require help from tempdb;   
  • any transactions left uncommitted and not rolled back can leave objects orphaned in tempdb;
  • use of an ODBC DSN with the option ‘create temporary stored procedures’ set can leave objects there for the life of the connection.

Other points of analysis

The following will tell you how tempdb’s space is allocated:

USE tempdb
EXEC sp_spaceused

The following should give you some clues as to which table(s) consume most of the space in the data file(s) — this will help you narrow down any transactions that are either taking a long time or repeatedly being left in limbo:

USE tempdb

SELECT name
    FROM tempdb..sysobjects
    WHERE xtype = 'U'

SELECT OBJECT_NAME(id) AS table_name, rowcnt
    FROM tempdb..sysindexes
    ORDER BY rowcnt DESC

The higher rowcount values will likely indicate the biggest temporary tables that are consuming space. And while it won’t tell you everything, since tempdb is used for internal I/O and other processes such as sorting, it may help you narrow down the stored procedure(s) that are causing the growth (you can query INFORMATION_SCHEMA.ROUTINES for ROUTINE_DEFINITION LIKE ‘%#table_name%’ from above). 

In addition to this, you can use Profiler to watch for events like database file auto grow and log file auto grow. If this is happening often, then you know that the space you’ve allocated to tempdb is not sufficient.

You can also watch performance monitor’s counter for PhysicalDisk: CurrentDiskQueueLength on the drive where tempdb exists. If this number is consistently greater than 2, then there is likely a bottleneck in disk I/O.

Short-term fix

Restarting SQL Server will re-create tempdb from scratch, and it will return to its usually allocated size. In and of itself, this solution is only effective in the very short term; presumably, the application and/or T-SQL code that caused tempdb to grow once will likely cause it to grow again. 

To shrink tempdb, you can consider using DBCC ShrinkDatabase, DBCC ShrinkFile (for the data or the log file), or ALTER DATABASE. See KB #256650, KB #272318, and KB #307487 for more information.

If you can’t shrink the log, it might be due to an uncommitted transaction. See if you have any long-running transactions with the following command: 

DBCC OPENTRAN ('tempdb')

Check the oldest transaction (if it returns any), and see what the SPID is (there will be a line starting with ‘SPID (Server Process ID) : <number>’). Use that <number> in the following:

DBCC INPUTBUFFER (<number>)

This will tell you at least a portion of the last SQL command executed by this SPID, and will help you determine if you want to end this process with: 

KILL <number> 



Long-term prevention

Here are some suggestions for maintaining a healthy tempdb: 

  • Make sure that tempdb is set to autogrow — do *NOT* set a maximum size for tempdb. If the current drive is too full to allow autogrow events, then buy a bigger drive, or add files to tempdb on another device (using ALTER DATABASE) and allow those files to autogrow. You will need at least one data file and at least one log file in order to avoid this problem from halting your system in the future.
  • For optimal performance, make sure that its initial size is adequate to handle a typical workload (autogrow events can cause performance to suffer as it allocates new extents). For an approach to setting a non-default size for tempdb, see the suggestion from Dinesh at….  
  • If possible, put tempdb on its own physical disk, array or disk subsystem (see KB #224071 for more information).
  • To prevent tempdb log file growth, make sure tempdb is in simple recovery mode (this allows the log to be truncated automatically). To check if this is the case:  

-- SQL Server 7.0, should show 'trunc. log on chkpt.'
-- or 'recovery=SIMPLE' as part of status column:

EXEC sp_helpdb 'tempdb'

-- SQL Server 2000, should yield 'SIMPLE':

SELECT DATABASEPROPERTYEX('tempdb', 'Recovery')

  • If the database is not set to simple recovery, you can force it as follows:   

ALTER DATABASE tempdb SET RECOVERY SIMPLE

  • Use SQLOLEDB, not ODBC / DSN for database access (for VB / ASP, see Article #2126 for sample connection strings).  
  • Try to make sure you have covering indexes for all large tables that are used in queries that can’t use a clustered index / index seek.  
  • Batch larger heavily-logged operations (especially deletes) that *might* overflow into tempdb into reasonable ‘chunks’ of rows, especially when joins are involved.  
  • Pore over your code for potential uncommitted transactions and other elements from the list at the top of the page.  
  • In general, try to make your code as efficient as possible… avoid cursors, nested loops, and #temp tables if possible. See Article #2424 for some other general ideas on efficiency.
  • Check out the WebCast in KB #834846 for some ideas from Microsoft about administering and maintaining TempDB.
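The first bullet’s suggestion of adding files to tempdb on another device might be sketched as follows (drive letter, file name, and sizes are placeholders):

```sql
-- Add a second tempdb data file on another drive so autogrow
-- has room even if the original volume fills up
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'E:\SQLData\tempdb2.ndf',  -- placeholder path
          SIZE = 512MB,
          FILEGROWTH = 256MB);
```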


18. If SQL Server is not running, how do you find the Error Log path?


In SQL Server Configuration Manager, click SQL Server Services on the left side, then right click on SQL Server (MSSQLSERVER) and select Properties from the drop down. For a named instance, right click on the appropriate SQL Server (INSTANCENAME) service.


In the SQL Server (MSSQLSERVER) Properties window, click on the Advanced tab and then expand the drop down next to Startup Parameters. The location of the SQL Server Error Log file appears next to the “-e” startup parameter.

Identify SQL Server Error Log file used by SQL Server Database Engine Using Application Event Viewer

1. Click Start -> All Programs -> Administrative Tools -> Server Manager.

2. In Server Manager, expand Diagnostics, expand Event Viewer, expand Windows Logs and then select Application in the left panel. In the right panel you need to filter for events with Event ID 17111. To set a filter, right click on Application and select Filter Current Log.

3. To view the location of the SQL Server Error Log file, double click an event to see its properties.


19. How do you find the current isolation level?

DBCC USEROPTIONS
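On SQL Server 2005 and later, the session’s isolation level can also be read from a DMV; a sketch:

```sql
-- 1 = READ UNCOMMITTED, 2 = READ COMMITTED (default),
-- 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```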


20. What is the difference between 64-bit and 32-bit SQL Server 2008?


Mainly it starts at the hardware level with the way the CPU handles information. For example, a 32-bit CPU can only address 2^32 bytes, which limits it to 4 GB of address space (and, by default, 2 GB of user-mode address space per process on Windows). A 64-bit CPU can address 2^64 bytes, which is a very large amount of memory. You can easily conclude that any hardware improvement will positively affect all software that runs on it, and SQL Server is no exception.


21. Log not truncating in Simple Recovery.

Simple Recovery Model

Under the Simple Recovery Model, SQL Server truncates the transaction log when a checkpoint is created. There are many factors that can cause a checkpoint to be created. One of these is a database backup; any ShadowProtect backup will be considered a database backup under the following circumstances.

  1. The SQL VSS writer is installed and enabled.
  2. The database and log files are being backed up.
  3. Both files need to be backed up in the same job.
  4. They do not have to be on the same volume.
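If the log still is not truncating, a quick way to see what is holding it up (a sketch; available in SQL Server 2005 and later) is to check sys.databases:

```sql
-- Shows why the transaction log cannot be truncated for each database.
-- Common values: NOTHING, CHECKPOINT, ACTIVE_TRANSACTION, REPLICATION, LOG_BACKUP.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```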


22.  Difference between Index Scan and Index Seek


Table1 has five columns: Col1, Col2, Col3, Col4, Col5
Index1 on Table1 contains two columns: Col1, Col3
Query1 on Table1 retrieves two columns: Col1, Col5


Now when Query1 is run on Table1 it will use the search predicates Col1, Col5 to figure out whether it will use Index1 or not. As Col1, Col5 of Query1 are not the same as Col1, Col3 of Index1, there is a good chance that Query1 will not use Index1. This scenario will end up in a table scan. If Query1 had used Index1, it would have resulted in an index seek.

An Index Scan happens when the index definition cannot point closely to a single row that satisfies the search predicates. In this case SQL Server has to scan multiple pages to find the range of rows which satisfy the search predicates. In the case of an Index Seek, SQL Server finds a single row (or narrow range) matching the search predicates using the index definition.


23. How can I modify an Execution Plan to use a Seek instead of a Scan?


When SQL Server is looking for your data, probably the single biggest thing that will make it switch from a seek to a scan is when some of the columns you are looking for are not included in the index you want it to use. Most often this makes SQL Server fall back to a clustered index scan, since the clustered index contains all the columns in the table. This is one of the biggest reasons (in my opinion at least) that we now have the ability to INCLUDE columns in an index, without adding those columns to the indexed key columns. By including the additional columns we increase the size of the index, but we allow SQL Server to read the index without having to go back to the clustered index, or to the table itself, to get those values.
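As a sketch, using the Table1/Col1/Col5 naming from the earlier answer (the index name is hypothetical), a covering index with an included column looks like this:

```sql
-- Key column Col1 is used for seeking; Col5 is carried only at the
-- leaf level, so it widens the leaf pages but not the B-tree keys.
CREATE NONCLUSTERED INDEX IX_Table1_Col1
ON dbo.Table1 (Col1)
INCLUDE (Col5);

-- This query can now be satisfied entirely from IX_Table1_Col1
-- (a covering index), so no clustered index lookup is needed:
SELECT Col1, Col5 FROM dbo.Table1 WHERE Col1 = 42;
```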


24. How can processors be assigned to SQL Server?


Right click on the instance in Object Explorer > Properties > select the Processors page, where you can set the processor and I/O affinity.


25. What is .TUF & .WRK File?


  • TUF file: the Transaction Undo File. It is generated only when you have configured log shipping with the Standby option. Since in standby log shipping the secondary database is available to users, the TUF file keeps the pending (uncommitted) transactions from the restored log file so that when the next log backup comes from the primary they can be synchronized on the secondary.
  • WRK file: this extension is given to a file while it is being copied from the primary backup location to the secondary; once the copy has completed, the file is renamed with the .trn extension.


26. SQL Server Agent Service is not working?


 Change the service account to a system account such as "LocalSystem" or "NetworkService" (using SQL Server Configuration Manager) and then restart SQL Server Agent.


27. SQL Server Performance Tips?

SQL Server Performance Tips

SQL Server performance tuning can consume a considerable amount of time and effort. The following list is a quick guideline that you should keep in mind when designing and developing SQL Server database applications:

User Defined Functions (UDF)

Refrain from using user defined functions (UDF) in a select statement that may potentially return many records. UDFs are executed as many times as there are rows in a returned result. A query that returns 100,000 rows calls the UDF 100,000 times.
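A minimal sketch of the problem (the function, table, and column names are hypothetical):

```sql
-- Scalar UDF: invoked once per row of the result set.
CREATE FUNCTION dbo.fn_FullName (@First nvarchar(50), @Last nvarchar(50))
RETURNS nvarchar(101)
AS
BEGIN
    RETURN @First + N' ' + @Last;
END;
GO

-- Called 100,000 times if dbo.Person returns 100,000 rows:
SELECT dbo.fn_FullName(FirstName, LastName) FROM dbo.Person;

-- Inlining the expression avoids the per-row function-call overhead:
SELECT FirstName + N' ' + LastName AS FullName FROM dbo.Person;
```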

SQL Server table indexes

Create SQL statements that utilize the defined table indexes. Using indexes minimizes the number of table scans, which in most cases will be much slower than index access.

Multiple disks

The single best performance increase on a SQL Server computer comes from spreading I/O among multiple drives. Adding memory is a close second. Having many smaller drives is better than having one large drive for SQL Server machines. Even though the seek time is faster in larger drives, you will still get a tremendous performance improvement by spreading files, tables, and logs among more than one drive.

Disk controllers

Different disk controllers and drivers use different amounts of CPU time to perform disk I/O. Efficient controllers and drivers use less time, leaving more processing time available for user applications and increasing overall throughput.

SQL Server foreign keys

Ensure that all your tables are linked with foreign keys. Foreign keys enhance the performance of queries with joins. Database tables inside each application are naturally related. Islands of tables are rarely needed if your application’s business logic is well defined.

SQL Server primary keys

Ensure that every table has a primary key. If you can’t find a natural set of columns to serve as a primary key, create a new column and make it the primary key of the table.

Processor (CPU)

When you examine processor usage, consider the type of work the instance of SQL Server is performing. If SQL Server is performing a lot of calculations, such as queries involving aggregates or memory-bound queries that require no disk I/O, 100 percent of the processor’s time can be used. If this causes the performance of other applications to suffer, try changing the workload of the queries with aggregates.


28. What is the difference between a scan and a seek?

A scan returns the entire table or index.  A seek efficiently returns rows from one or more ranges of an index based on a predicate.  For example, consider the following query:

select OrderDate from Orders where OrderKey = 2


With a scan, we read each row in the orders table, evaluate the predicate “where OrderKey = 2” and, if the predicate is true (i.e., if the row qualifies), return the row.  In this case, we refer to the predicate as a “residual” predicate.  To maximize performance, whenever possible we evaluate the residual predicate in the scan.  However, if the predicate is too expensive, we may evaluate it in a separate filter iterator.  The residual predicate appears in text showplan with the WHERE keyword or in XML showplan with the <Predicate> tag.

Here is the text showplan (slightly edited for brevity) for this query using a scan:

  |--Table Scan(OBJECT:([ORDERS]), WHERE:([ORDERKEY]=(2)))

The following figure illustrates the scan:

Since a scan touches every row in the table whether or not it qualifies, the cost is proportional to the total number of rows in the table.  Thus, a scan is an efficient strategy if the table is small or if most of the rows qualify for the predicate.  However, if the table is large and if most of the rows do not qualify, we touch many more pages and rows and perform many more I/Os than is necessary.


Going back to the example, if we have an index on OrderKey, a seek may be a better plan.  With a seek, we use the index to navigate directly to those rows that satisfy the predicate.  In this case, we refer to the predicate as a “seek” predicate.  In most cases, we do not need to re-evaluate the seek predicate as a residual predicate; the index ensures that the seek only returns rows that qualify.  The seek predicate appears in the text showplan with the SEEK keyword or in XML showplan with the <SeekPredicates> tag.

Here is the text showplan for the same query using a seek (illustrative output; the index name OKEY_IDX is hypothetical and assumes a non-clustered index on OrderKey):

  |--Index Seek(OBJECT:([ORDERS].[OKEY_IDX]), SEEK:([ORDERKEY]=(2)) ORDERED FORWARD)
The following figure illustrates the seek:

Since a seek only touches rows that qualify and pages that contain these qualifying rows, the cost is proportional to the number of qualifying rows and pages rather than to the total number of rows in the table.  Thus, a seek is generally a more efficient strategy if we have a highly selective seek predicate; that is, if we have a seek predicate that eliminates a large fraction of the table.

A note about showplan

In showplan, we distinguish between scans and seeks as well as between scans on heaps (an object with no index), clustered indexes, and non-clustered indexes.  The following table shows all of the valid combinations:





                      Scan                    Seek

Heap                  Table Scan              (not applicable)

Clustered Index       Clustered Index Scan    Clustered Index Seek

Non-clustered Index   Index Scan              Index Seek



29.  Index Scans and Table Scans

      An index scan or table scan is when SQL Server has to scan the data or index pages to find the appropriate records.  A scan is the opposite of a seek, where a seek uses the index to pinpoint the records that are needed to satisfy the query.  The reason you would want to find and fix your scans is because they generally require more I/O and also take longer to process.  This is something you will notice with an application that grows over time.  When it is first released performance is great, but over time as more data is added the index scans take longer and longer to complete.

To find these issues you can start by running Profiler or setting up a server side trace and look for statements that have high read values.  Once you have identified the statements then you can look at the query plan to see if there are scans occurring.

Here is a simple query that we can run. First use Ctrl+M to turn on the actual execution plan and then execute the query.

SELECT * FROM Person.Contact

Here we can see that this query is doing a Clustered Index Scan.  Since this table has a clustered index and there is not a WHERE clause SQL Server scans the entire clustered index to return all rows.  So in this example there is nothing that can be done to improve this query.

In this next example I created a new copy of the Person.Contact table without a clustered index and then ran the query.

SELECT * FROM Person.Contact2

Here we can see that this query is doing a Table Scan, so when a table has a Clustered Index it will do a Clustered Index Scan and when the table does not have a clustered index it will do a Table Scan.  Since this table does not have a clustered index and there is not a WHERE clause SQL Server scans the entire table to return all rows.  So again in this example there is nothing that can be done to improve this query.

In this next example we include a WHERE clause for the query.

SELECT * FROM Person.Contact WHERE LastName = 'Russell'

Here we can see that we still get the Clustered Index Scan, but this time SQL Server is letting us know there is a missing index.  If you right click on the query plan and select Missing Index Details you will get a new window with a script to create the missing index.

Let’s do the same thing for our Person.Contact2 table.

SELECT * FROM Person.Contact2 WHERE LastName = 'Russell'

We can see that we still have the Table Scan, but SQL Server doesn’t offer any suggestions on how to fix this.

Another thing you could do is use the Database Engine Tuning Advisor to see if it gives you any suggestions. If I select the query in SSMS, right click and select Analyze Query in Database Engine Tuning Advisor, the tool starts up and I can select the options and start the analysis.

Below is the suggestion this tool provides, and we can see that it recommends creating a new index, so using both tools can be beneficial.

Create New Index

So let’s create the recommended index on Person.Contact and run the query again.

USE [AdventureWorks]
GO
CREATE NONCLUSTERED INDEX [IX_LastName]
ON [Person].[Contact] ([LastName])
GO
SELECT * FROM Person.Contact WHERE LastName = 'Russell'

Here we can see the query plan has changed and instead of a Clustered Index Scan we now have an Index Seek which is much better.  We can also see that there is now a Key Lookup operation which we will talk about in the next section.


By finding and fixing your Index Scans and Table Scans you can drastically improve performance, especially for larger tables. So take the time to identify where your scans may be occurring and create the necessary indexes to solve the problem. One thing that you should be aware of is that too many indexes also cause issues, so make sure you keep a balance on how many indexes you create for a particular table.

30. Discovering Unused Indexes?

When SQL Server 2005 was introduced it added Dynamic Management Views (DMVs) that allow you to get additional insight as to what is going on within SQL Server.  One of these areas is the ability to see how indexes are being used.  There are two DMVs that we will discuss.   Note that these views store cumulative data, so when SQL Server is restarted the counters go back to zero; be aware of this when monitoring your index usage.

DMV – sys.dm_db_index_operational_stats

This DMV allows you to see insert, update and delete information for various aspects for an index.  Basically this shows how much effort was used in maintaining the index based on data changes.

If you query the table and return all columns, the output may be confusing.  So the query below focuses on a few key columns.  To learn more about the output for all columns you can check out Books Online.
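A query along these lines returns the key maintenance counters (a sketch; the columns shown are my own selection from sys.dm_db_index_operational_stats):

```sql
-- Insert/update/delete counts per index: how much work SQL Server
-- spent maintaining each index as data changed.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.leaf_insert_count,
       s.leaf_update_count,
       s.leaf_delete_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) s
JOIN sys.indexes i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
ORDER BY table_name, index_name;
```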


Below we can see the number of Inserts, Updates and Deletes that occurred for each index, so this shows how much work SQL Server had to do to maintain the index. 

DMV – sys.dm_db_index_usage_stats

This DMV shows you how many times the index was used for user queries.  Again there are several other columns that are returned if you query all columns and you can refer to Books Online for more information.
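A query along these lines shows the per-index usage counters (a sketch; the join to sys.indexes is my own addition to resolve index names):

```sql
-- Seeks, scans, lookups, and updates per index for the current database.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
```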


Here we can see seeks, scans, lookups and updates. 

  • The seeks refer to how many times an index seek occurred for that index.  A seek is the fastest way to access the data, so this is good.
  • The scans refers to how many times an index scan occurred for that index.  A scan is when multiple rows of data had to be searched to find the data.  Scans are something you want to try to avoid.
  • The lookups refer to how many times the query required data to be pulled from the clustered index or the heap (when the table does not have a clustered index).  Lookups are also something you want to try to avoid.
  • The updates refers to how many times the index was updated due to data changes which should correspond to the first query above.

Identifying Unused Indexes

So based on the output above you should focus on the output from the second query.  If you see indexes where there are no seeks, scans or lookups, but there are updates this means that SQL Server has not used the index to satisfy a query but still needs to maintain the index.  Remember that the data from these DMVs is reset when SQL Server is restarted, so make sure you have collected data for a long enough period of time to determine which indexes may be good candidates to be dropped.

31. The data file is growing fast. What are the possible reasons? How do you resolve it?

Data refreshes or large data loads are a common reason for a data file growing fast.

You can shrink the file,

but only when the used space in the data file is below 80%.

Otherwise, add one more data file in a different location.


32. SQL Server utilizing 100% CPU


– High I/O utilization (processes waiting on I/O)

– Reasons could include a high number of compilations or recompilations, bad queries, or poorly written cursors.

– Check the performance counters SQL Statistics: Batch Requests/sec, SQL Statistics: SQL Compilations/sec, and SQL Statistics: SQL Recompilations/sec.



  1. While taking a full backup can we take a differential backup?
    Yes, the differential backup operation is allowed, but it will only
    succeed after the full backup has completed.


  2. While taking a full backup can we take a transaction log backup?

Yes. From SQL Server 2005 onwards, a transaction log backup can run concurrently with a full database backup; log truncation, however, is deferred until the full backup completes.

  3. What is a differential backup?

A differential backup is based on the most recent, previous full data backup. A differential backup captures only the data that has changed since that full backup. The full backup upon which a differential backup is based is known as the base of the differential. Full backups, except for copy-only backups, can serve as the base for a series of differential backups, including database backups, partial backups, and file backups. The base backup for a file differential backup can be contained within a full backup, a file backup, or a partial backup.



  • Creating a differential backup can be very fast compared to creating a full backup. A differential backup records only the data that has changed since the full backup upon which the differential backup is based. This facilitates taking frequent data backups, which decreases the risk of data loss. However, before you restore a differential backup, you must restore its base. Therefore, restoring from a differential backup necessarily takes more steps and time than restoring from a full backup, because two backup files are required.
  • Differential database backups are especially useful if a subset of a database is modified more frequently than the rest of the database. In these cases, differential database backups enable you to back up frequently without the overhead of full database backups.
  • Under the full recovery model, using differential backups can reduce the number of log backups that you have to restore.
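As a sketch (the database name and file paths are hypothetical), a differential backup and its restore sequence look like this:

```sql
-- Full backup: the base of the differential.
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb_full.bak';

-- Later: back up only the extents changed since the full backup.
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL;

-- Restore: the base first (NORECOVERY), then the differential.
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb_diff.bak' WITH RECOVERY;
```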



  4. What is Fragmentation? How to detect fragmentation and how to eliminate it?

Storing data non-contiguously on disk is known as fragmentation. Before learning to eliminate fragmentation, you should have a clear understanding of the types of fragmentation. We can classify fragmentation into two types:

  • Internal Fragmentation: When records are stored non-contiguously inside the page, then it is called internal fragmentation. In other words, internal fragmentation is said to occur if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) that are made against the table and therefore, to the indexes defined on the table. As these modifications are not equally distributed among the rows of the table and indexes, the fullness of each page can vary over time. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.
  • External Fragmentation: occurs when the physical storage of pages and extents on disk is not contiguous. When the extents of a table are not physically stored contiguously on disk, switching from one extent to another causes more disk head movement; this is called Extent Fragmentation.


Index pages also maintain a logical order of pages inside the extent. Every index page is linked with the previous and next page in the logical order of column data. However, because of page splits, the pages turn into out-of-order pages. An out-of-order page is a page for which the next physical page allocated to the index is not the page pointed to by the next-page pointer in the current leaf page. This is called Logical Fragmentation.

Ideal non-fragmented pages are given below:

Statistics for table scan are as follows:

  • Page read requests: 2
  • Extent switches: 0
  • Disk space used by table: 16 KB
  • avg_fragmentation_in_percent: 0
  • avg_page_space_used_in_percent: 100

Following are fragmented pages:

In this case, the statistics for table scan are as follows:

  • Page read requests: 6
  • Extent switches: 5
  • Disk space used by table: 48 KB
  • avg_fragmentation_in_percent > 80
  • avg_page_space_used_in_percent: 33

How to detect Fragmentation: We can get both types of fragmentation using the DMV: sys.dm_db_index_physical_stats. For the screenshot given below, the query is as follows:

SELECT OBJECT_NAME(object_id) AS table_name, index_id, index_type_desc, index_level,
       avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
(DB_ID(N'AdventureWorksLT'), NULL, NULL, NULL, 'SAMPLED')
ORDER BY avg_fragmentation_in_percent DESC

Along with other information, there are two important columns for detecting fragmentation, which are as follows:

  • avg_fragmentation_in_percent: This is a percentage value that represents external fragmentation. For a clustered table and leaf level of index pages, this is Logical fragmentation, while for heap, this is Extent fragmentation. The lower this value, the better it is. If this value is higher than 10%, some corrective action should be taken.
  • avg_page_space_used_in_percent: This is the average percentage use of pages, which represents internal fragmentation. The higher this value, the better it is. If this value is lower than 75%, some corrective action should be taken.

Reducing fragmentation:

  • Reducing Fragmentation in a Heap: To reduce the fragmentation of a heap, create a clustered index on the table. Creating the clustered index rearranges the records in order and places the pages contiguously on disk.
  • Reducing Fragmentation in an Index: There are three choices for reducing fragmentation, and we can choose one according to the percentage of fragmentation:
  • If avg_fragmentation_in_percent > 5% and < 30%, then use ALTER INDEX REORGANIZE: This statement is a replacement for DBCC INDEXDEFRAG and reorders the leaf-level pages of the index in logical order. As this is an online operation, the index is available while the statement is running.
  • If avg_fragmentation_in_percent > 30%, then use ALTER INDEX REBUILD: This is a replacement for DBCC DBREINDEX and rebuilds the index online or offline. In such a case, we can also use the drop and re-create index method.
  • (Update: Please note this option is strongly NOT recommended) Drop and re-create the clustered index: Re-creating a clustered index redistributes the data and results in full data pages. The level of fullness can be configured by using the FILLFACTOR option in CREATE INDEX.
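The two recommended choices above look like this in practice (the index and table names are hypothetical):

```sql
-- 5-30% fragmentation: reorganize (always an online operation).
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

-- > 30% fragmentation: rebuild (ONLINE = ON requires Enterprise Edition).
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);
```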


  5. What is a TUF file?

The transaction undo file contains modifications that were not committed on the source database but were in progress when the transaction log was backed up, in the case where the log was restored to another database left in a state that allows additional transaction log backups to be restored to it in the future (standby mode). When another transaction log is restored, SQL Server uses the data from the undo file and the transaction log to continue restoring the incomplete transactions (assuming they were completed in the next transaction log file). Following the restore, the undo file is re-written with any transactions that are, at that point, still incomplete.


TUF file is known as “Transaction Undo File”

This file is created when LogShipping is configured in SQL Server.

This file contains the list of transactions that were still uncommitted when the log backup was taken on the primary server in Log Shipping.

If this file is deleted, you have to reconfigure log shipping for the secondary server.

This file is located in the path where transaction log files are saved.


  6. Difference between Reindexing and Reorganizing?


Index Rebuild: This process drops the existing index and recreates the index.

USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REBUILD
GO

Index Reorganize: This process physically reorganizes the leaf nodes of the index.

USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REORGANIZE
GO

1) Index rebuild works by re-creating the index internally and, once that has been achieved, dropping the existing index, whereas index reorganize is the process of physically re-organizing the leaf nodes of the index.

2) During the index rebuild process, the statistics are also re-computed – the same as when a new index gets created. Reorganize, on the other hand, does not update the statistics. Reorganize essentially just swaps one page with another and thus does not require free space for the operation like rebuild does. In fact, reorganize can free up some pages, as it does the reorg in two phases – compaction and defrag. A reorganize can remove almost all of the logical fragmentation, but it cannot necessarily fix extent fragmentation, in which the previous and the next extents are not physically contiguous.

3) Another point to note is that an index (clustered or non-clustered) cannot be built online if it contains LOB data (text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data type columns). The ALTER INDEX…REORGANIZE command shown above is the same as DBCC INDEXDEFRAG but there is one difference. ALTER INDEX…REORGANIZE has some additional features like large objects compaction (LOB_COMPACTION). And this is an online operation.

4) Regarding partitions of an index, if an index has multiple partitions, then you cannot rebuild a single partition online. You can reorganize a single index partition online. If you want to rebuild an index with multiple partitions in an online environment, you need to rebuild the entire index which means rebuilding all the partitions.
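For a partitioned index, the per-partition syntax looks like this (the names are hypothetical; single-partition online rebuild only became available later, in SQL Server 2014):

```sql
-- Reorganize just partition 3 (online):
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE PARTITION = 3;

-- Rebuild just partition 3 (offline in SQL Server 2005/2008):
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD PARTITION = 3;
```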




(SQL 2005)

                          Index Rebuild                         Index Reorganize

Rebuild all indexes       Need to run for each index; in        Need to run for each index; in
                          SQL 2005, using ALTER INDEX, you      SQL 2005, using ALTER INDEX, you
                          can specify ALL indexes.              can specify ALL indexes.

Online operation          No, users will be locked out until    Yes, users can still use the
                          complete. In SQL Server 2005          table.
                          Enterprise Edition you can build
                          indexes online.

Transaction log impact    Depends on the recovery model of      Fully logged operation regardless
                          the database. If set to the full      of the database recovery model.
                          recovery model it can consume a       If the index is very fragmented
                          lot of space for the operation to     this could potentially take up
                          complete.                             more transaction log space.

Parallelism               Can run in parallel (uses             Runs single-threaded.
                          multiple threads).





  • REBUILD locks the table for the whole operation period (which may be hours or days if the table is large).
  • REBUILD is also a DDL operation which changes the system tables, affects statistics, enables disabled indexes, etc.
  • Rebuild works by dropping the current indexes and recreating new ones.
  • Rebuilding an index changes the whole B-tree, recreating the index.
  • Rebuilding an index takes more resources, produces locks, and slows performance (if you choose to keep the table online), so you need to find the right time for that process.
  • Rebuild index rebuilds one or more indexes for a table in the specified database.


  • Reorganizing is like putting the house in order with what you already have.
  • REORGANIZE doesn’t lock the table.
  • REORGANIZE is a pure cleanup operation which leaves all system state as is.
  • "Reorganize index" is a process of cleaning, organizing, and defragmenting the "leaf level" of the B-tree (really, the data pages).
  • Reorganize defragments clustered and non-clustered indexes of the specified table.


  7. How to find long running queries?


SELECT t.TEXT AS QueryName,
       s.execution_count AS ExecutionCount,
       s.max_elapsed_time AS MaxElapsedTime,
       ISNULL(s.total_elapsed_time / s.execution_count, 0) AS AvgElapsedTime,
       s.creation_time AS LogCreatedOn,
       ISNULL(s.execution_count / NULLIF(DATEDIFF(ss, s.creation_time, GETDATE()), 0), 0) AS FrequencyPerSec
FROM sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) t
ORDER BY s.max_elapsed_time DESC


  8. How to transfer Logins?

With the sp_help_revlogin script or the SSIS Transfer Logins task.

  9. What permission gives read-only access to jobs, views and stored procedures?


To get the ability to view all object definitions:

GRANT VIEW ANY DEFINITION TO <Read-Only-User-Who-Sees-Everything>


Managed to get the ability to view all jobs on SQL Server Agent:

EXEC sp_addrolemember 'SQLAgentReaderRole', '<ReadOnly-User-Who-Sees-Everything>'


10. Can you tell us something about how to find which SQL table is holding a lock and of what type? Also, please tell us how to remove a lock from a locked table.

In SQL Server 2000 (Enterprise Manager)

1. Expand Server > Management > Current Activity > expand Locks/Process ID and you will be able to see all the lock-related information.

2. Expand Server > Management > Current Activity > expand Locks/Object to see locks by object.

In SQL Server 2005 (SSMS, Object Explorer)
Expand Server > Management > double click Activity Monitor.
On the left side you have three options to choose from; select those options and you can see all the lock-related information.

run this stored procedure in the database.

1. sp_lock

to know the running process in the sql server, run this query,

2. select * from sysprocesses ( in sql server 2000)
3. select * from sys.sysprocesses ( in sql server 2005)

4. sp_who
5. sp_who2 will also give you some good information.

To work around the locks, you can run Profiler to check which query is creating a lock and whether it is necessary.

Types of locks on object level, ( general idea)

Database: the entire database.
Extent: a contiguous group of eight data pages or index pages.
Key: a row lock within an index.
Page: an 8-kilobyte (KB) data page or index page.
RID: Row ID; used to lock a single row within a heap.
Table: the entire table, including all data and indexes.

Types of locks:
Shared (S) – more than one query can read the object.
Exclusive (X) – only one query can access the object.
Update lock (U)
Intent Shared (IS)
Intent Exclusive (IX)

Just to give you a brief idea about locks: we have something called transaction isolation levels in SQL Server databases.


Level 0 is the lowest isolation level. If your database is set to this isolation level, queries will not lock any resources; there will be no locks on the database, not even shared locks.

Queries at this level will also read uncommitted data: data which has not been committed yet can still be read.

Level 1 is the default isolation level of the database.
Under this level you will not be able to read uncommitted data, which is also called dirty data. Under this level we have shared locks.

As the level increases, the locking also increases. The highest is SERIALIZABLE.

To make you understand in detail, let’s see an example of what committed data and uncommitted data are.

use pubs
create table example1 ( eid int, ename varchar(10))

begin tran T1
insert into example1 values ( 1, 'example')

select * from example1 -- this is uncommitted data.

The above is an uncommitted transaction: because you started the transaction with a BEGIN, you have to commit the transaction; until then, the transaction will remain uncommitted.

to commit the same transaction

commit tran T1

select * from example1 -- this is committed data.

To check what is the current isolation level of your database, run this command,

DBCC useroptions -- check for isolation level.

If you don’t want your query to put locks on objects, you might want to use something like this:

select * from example1_1 with (nolock)

This will not keep any lock, not even a shared lock on the table.


1. Explain SQL Architecture

A: SQL Server is a collection of 4 servers:

Database Engine (the core relational database server),

Integration Services (ETL processes, to implement import and export activities),

Analysis Services (to build data warehouses),

Reporting Services (an interface used to present data or information)

A Basic select Statement Life Cycle Summary



Figure 1-5 shows the whole life cycle of a SELECT query, described here:


1. The SQL Server Network Interface (SNI) on the client established a connection to the SNI on the SQL Server using a network protocol such as TCP/IP. It then created a connection to a TDS endpoint over the TCP/IP connection and sent the SELECT statement to SQL Server as a TDS message.

2. The SNI on the SQL Server unpacked the TDS message, read the SELECT statement, and passed a “SQL Command” to the Command Parser.

3. The Command Parser checked the plan cache in the buffer pool for an existing, usable query plan. When it didn’t find one, it created a query tree based on the SELECT statement and passed it to the Optimizer to generate a query plan.

4. The Optimizer generated a “zero cost” or “trivial” plan in the pre-optimization phase because the statement was so simple. The query plan created was then passed to the Query Executor for execution.

5. At execution time, the Query Executor determined that data needed to be read to complete the query plan so it passed the request to the Access Methods in the Storage Engine via an OLE DB interface.

6. The Access Methods needed to read a page from the database to complete the request from the Query Executor and asked the Buffer Manager to provision the data page.

7. The Buffer Manager checked the data cache to see if it already had the page in cache. It wasn’t in cache so it pulled the page from disk, put it in cache, and passed it back to the Access Methods.

8. Finally, the Access Methods passed the result set back to the Relational Engine to send to the client.


2. Explain Storage Engine



3. Unable to start SQL Server. What is the reason, and how do you find the issue?


Check the physical error log path: open SQL Server Configuration Manager > right-click the instance > Properties > Advanced > check the Startup Parameters (the -e parameter gives the error log path).
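The error log can also be read from a query window; a quick sketch (log number 0 is the current log):

```sql
-- Read the current SQL Server error log to look for startup failure messages
EXEC sp_readerrorlog 0;
```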


4. The log file is corrupted and there are no backups. How will you get the database back?


SELECT name, state, state_desc FROM sys.databases WHERE name = 'ABCD'

Output for state_desc was RECOVERY_PENDING.


I instructed him to set the database to emergency mode.



Then, set the database to single-user mode and started the rollback.



Then rebuilt the log.



Then executed the initial statement and verified that database is ONLINE.


The final step is to set the database back to multi-user mode so that every user can log into the database.



With this, users were able to login to the database without any issues.
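The sequence described above can be sketched roughly as follows; the database name ABCD is taken from the earlier query, and REPAIR_ALLOW_DATA_LOSS is a last resort that can lose data:

```sql
ALTER DATABASE ABCD SET EMERGENCY;                           -- make the damaged database accessible
ALTER DATABASE ABCD SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- single-user mode, start the rollback
DBCC CHECKDB ('ABCD', REPAIR_ALLOW_DATA_LOSS);               -- rebuilds the log if it is missing or corrupt
ALTER DATABASE ABCD SET MULTI_USER;                          -- let every user log in again

SELECT name, state_desc FROM sys.databases WHERE name = 'ABCD'; -- verify ONLINE
```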


5. How to rebuild Log File?

A:For SQL Server 2005 and 2008:

Here are the steps:

In the management studio query window.

1. Get the DBid for the database to which you are going to rebuild the log.

Select * from sys.sysdatabases


2. Pass the dbid and get the logical name and the physical name from sysaltfiles. In my case I have randomly chosen '8'.

select name, filename from sys.sysaltfiles where dbid = '8'


3. To rebuild the log we need to put the database in emergency mode.

alter database mydatabase set emergency


4. We also need to put the database in single-user mode so that no one else can access it.

alter database mydatabase set single_user


5. Stop SQL Server and move the old log file (the location of the old file is given by the filename output of the query in step 2) to a separate location.


6. Once the old file is backed up in a different location, pass the same name and filename returned by step 2 into the command below and run it.

alter database mydatabase rebuild log on (name = <logical name>, filename = '<physical path>')



7. Finally bring the database online.

alter database mydatabase set online

mydatabase is the name of the database for which you are rebuilding the log.


6. Tell me something about SQLOS?


SQLOS is a layer between SQL Server and Windows. It provides operating-system-style services to the engine, such as thread scheduling, memory management, and synchronization.


7. There are 3 instances on my SQL Server machine and one instance is consuming more CPU. How do you find which one?


Right-click the instance name / reports / standard / top sessions

Before discussing how to do this, we should discuss these terms:

  • SPID is the SQL Server Process ID number and is assigned by SQL Server to each new connection. It starts at one and is unique within the instance. SPIDs 1 through 50 are reserved for system use and are not used for any user connections.
  • KPID is the kernel-process ID. Under SQL Server for Windows this is the thread ID number, also known as “ID Thread,” and is assigned by Windows when the thread is created. The Thread ID number is a system-wide identifier that uniquely identifies the thread. KPID is visible by querying the KPID column of master..sysprocesses. It is only filled in for spid numbers four and higher. You can also get KPID/ID Thread from Windows Perfmon using the “Thread” object and the “ID Thread” counter.

I am sure you are familiar with the SPID value, but have you ever noticed KPID? Probably not, unless you queried the sysprocesses view directly. Let’s look at an example of how this works.


Step 1

If we look at Task Manager on the server we can see the CPU usage. So the first step is to verify that the SQL Server process (sqlservr), and not some other process, is responsible for the excessive CPU use. Remember, the “sqlservr” process is made up of many different threads, each one generally (but not always) representing a specific user connection. In the screenshot below we see the sqlservr process is taking approximately 70 percent of CPU.

Now that we know SQL Server is the main culprit and responsible for the high CPU, the next step is to find out which SQL Server process is causing the high CPU.
Step 2
Now we need to find out what SQL Server thread is causing the high CPU. Once we have this ID Thread we can correlate that ID thread (KPID) to the SPID in SQL Server.
We will use Performance Monitor to get this info. Type perfmon in a Windows CMD prompt or launch from Control Panel.
Click on Add counters and select the “Thread” object in the drop down.
Select these counters at the same time:

  • % Processor Time
  • ID Thread
  • Thread State
  • Thread Wait Reason

In the right pane, you will see multiple instances from multiple applications, which are running on that server. Since we are looking for “sqlservr” select all of the instances that begin with “sqlservr” from the list box as shown below and click Add. You can add other counters as needed and will see below that I added a few additional counters.

Step 3
Press (Ctrl+R) or click on the view Report tab to change from graphical to report view as shown below. Here you should be able to find which ID Thread is eating up your CPU resources by using the scroll bar to see all instances.

Below we have identified which thread is causing the problem. Here you can see ID Thread 30 is taking 46.841% of the total CPU. You can also find the ID Thread which is 872. Now that we know the ID Thread, we can run a query in SSMS to find the actual query.

Step 4
Our last step is to correlate the Thread ID (KPID) identified in the last step to the SPID. To do this, run the following query in Query analyzer:

SELECT spid, kpid, dbid, cpu, memusage FROM sysprocesses WHERE kpid=872


Step 5
From the query above we can see SPID 71 is causing the issue. To find how many threads and open transactions this is running we can run this query.

SELECT spid, kpid, status, cpu, memusage, open_tran, dbid FROM sysprocesses WHERE spid=71


Step 6
To get the exact query that is running, we can run DBCC INPUTBUFFER using the SPID. The below output shows us that a backup job is causing our CPU issues on our server.
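A sketch of that last step, using the SPID found above:

```sql
DBCC INPUTBUFFER (71);  -- shows the last statement sent by SPID 71
```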


8. How do you debug an MSI error during installation?




9. Difference between Checkpoint and Lazy writer?






Checkpoint is used by the SQL engine to keep database recovery time in check.

Lazy writer is used by the SQL engine only to make sure there is enough memory left in the SQL buffer pool to accommodate new pages.


Checkpoint always marks an entry in the T-log before it executes, whether run by the SQL engine or manually.

Lazy writer doesn't mark any entry in the T-log.


Checkpoint only checks whether a page is dirty or not.

Lazy writer clears a page from memory when it satisfies all 3 of these conditions:
1. Memory is required by another object and available memory is full
2. Cost factor of the page is zero
3. Page is not currently referenced by any connection


Checkpoint is affected by two parameters:
1. Checkpoint duration: how long the checkpoint can run for.
2. Recovery interval: affects how often it runs.

Lazy writer is affected by:
1. Memory pressure
2. Reference counter of the page in memory


Checkpoint: Flushes dirty pages to disk.
Lazy Writer: Flushes dirty pages to disk (before removing them).

Checkpoint: Flushes only data pages to disk.
Lazy Writer: Checks for available memory and removes items from the buffer pool (execution plans / compiled plans / data pages / memory objects).

Checkpoint: By default occurs approximately every 1 minute, and runs at the defined frequency.
Lazy Writer: Occurs depending upon memory pressure and resource availability.

Checkpoint: Can be managed with the sp_configure 'recovery interval' option.
Lazy Writer: It is lazy; SQL Server manages it on its own.

Checkpoint: Does not check memory pressure.
Lazy Writer: Monitors memory pressure and tries to maintain available free memory.

Checkpoint: Crash recovery will be fast reading the log, as the data file is already updated.
Lazy Writer: No role in recovery.

Checkpoint: Occurs for any DDL statement.
Lazy Writer: Occurs as required.

Checkpoint: Occurs before Backup/Detach commands.
Lazy Writer: Occurs as required.

Checkpoint: Depends upon the configuration setting; we can control it.
Lazy Writer: Works on least recently used pages and removes unused plans first; no user control.

Checkpoint: Under the simple recovery model, a checkpoint is triggered (and the log truncated) when the T-log becomes 70% full.
Lazy Writer: Not affected by the recovery model.

Checkpoint: Can be run manually/forcefully with the command "CHECKPOINT".
Lazy Writer: There is no command for the Lazy Writer.

Checkpoint: Very little performance impact.
Lazy Writer: No performance impact.

Checkpoint: Members of the sysadmin, db_owner and db_backupoperator roles can execute CHECKPOINT manually.
Lazy Writer: Not applicable.

Checkpoint: To get checkpoint entries in the error log, use DBCC TRACEON(3502, -1).
Lazy Writer: Not applicable.

Checkpoint: Affected by the database recovery model.
Lazy Writer: Not impacted by the recovery model of the database.

Checkpoint: Automatic frequency can be controlled using the recovery interval in sp_configure.
Lazy Writer: Works only under memory pressure; it uses a clock algorithm for cleaning the buffer cache.

Checkpoint: Keeps the number of dirty pages in memory to a minimum.
Lazy Writer: Helps to reduce paging.
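The manual command and the settings mentioned above can be sketched like this:

```sql
CHECKPOINT;                 -- force a checkpoint in the current database

DBCC TRACEON (3502, -1);    -- write checkpoint entries to the error log

EXEC sp_configure 'recovery interval', 5;  -- minutes; controls automatic checkpoint frequency
RECONFIGURE;
```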


10. What is AWE?


It can be found in the SQL Server level properties. AWE is properly explained in BOL, so we will just give a simple explanation here.

The Address Windowing Extensions API is commonly known as AWE. AWE is used by SQL Server when it has to support very large amounts of physical memory. The AWE feature is only available in the Enterprise, Standard, and Developer editions of the 32-bit version of SQL Server.

Microsoft Windows 2000/2003 Server supports a maximum of 64GB of memory. If we have installed the 32-bit version of SQL Server, which by default can use only about 2-3 GB of memory on Windows 2000/2003, we can enable the AWE feature to use the available physical memory of the server to improve SQL Server performance. In simple words, AWE provides memory management functions that let Windows give a standard 32-bit application more memory than its normal address-space limit.
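On a 32-bit instance, AWE is enabled through sp_configure (the service account also needs the Lock Pages in Memory privilege); a sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'awe enabled', 1;  -- 32-bit editions only; restart required
RECONFIGURE;
```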


11. What is PAE?



12. Tell me the list of page types?



18. Tell me the types of backups?



19. What is BCM page?



20. The log file is full. What do we have to do, and how do we find the cause?

    (no space to take a backup, unable to shrink, no space to add another file)..



21. Master database is corrupted?



22. What is the background process for DBCC?


When you execute one of the DBCC check commands (for example, DBCC CHECKDB), the Database Engine creates a database snapshot and brings it to a transactionally consistent state. The DBCC command then runs the checks against this snapshot. After the DBCC command is completed, this snapshot is dropped.

Sometimes an internal database snapshot is not required or cannot be created. When this occurs, the DBCC command executes against the actual database. If the database is online, the DBCC command uses table-locking to ensure the consistency of the objects that it is checking. This behavior is the same as if the WITH TABLOCK option were specified.

An internal database snapshot is not created when a DBCC command is executed in the following cases:

  • Against master, and the instance of SQL Server is running in single-user mode.
  • Against a database other than master, but the database has been put in single-user mode by using the ALTER DATABASE statement.
  • Against a read-only database.
  • Against a database that has been set in emergency mode by using the ALTER DATABASE statement.
  • Against tempdb. In this case, a database snapshot cannot be created because of internal restrictions.
  • Using the WITH TABLOCK option. In this case, DBCC honors the request by not creating a database snapshot.


23. What is stolen page?


Stolen pages are buffer pool pages taken from the data cache and used to store miscellaneous memory objects, such as the procedure (plan) cache.


33. What is the boot page?


In a database, the primary file holds the critical information in a page called the boot page. It is page 9 (the tenth page, since pages are numbered from zero) of the primary file in the PRIMARY filegroup.
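The boot page can be inspected with DBCC PAGE; a sketch against a hypothetical database name:

```sql
DBCC TRACEON (3604);               -- send DBCC output to the client
DBCC PAGE ('mydatabase', 1, 9, 3); -- file 1, page 9: the boot page
```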


34. What is the difference between 32-bit and 64-bit versions of Windows?

The terms 32-bit and 64-bit refer to the way a computer’s processor (also called a CPU) handles information. The 64-bit version of Windows handles large amounts of random access memory (RAM) more effectively than a 32-bit system.

35. What is difference between 32bit and 64 bit


It is the amount of memory that SQL Server 2008 can use. The 32-bit version is limited to only a few gigabytes, whereas the 64-bit version can use much, much more. With the 32-bit version, the server has to access the hard drive for database information far more often; a 64-bit OS with enough memory allows the server to hold much more of the database in memory, making queries faster.


Mainly it starts at the hardware level, with the way the CPU handles information. For example, a 32-bit CPU can only address 2^32 bytes, which limits it to 4 GB of address space (of which a user process normally sees 2 GB). A 64-bit CPU can address 2^64 bytes, which is a very large amount of memory. You can easily conclude that any hardware improvement will positively affect every piece of software that runs on it, and SQL Server is no exception.


The main difference I see in a practical sense is memory caps. 32-bit is limited to the old 2 GB threshold without AWE, and you have to use AWE (which to me is like loading device drivers high) to access physical memory beyond the OS limit. Whereas in 64-bit SQL Server, you can pretty much add memory in the current environment limitlessly (they aren't making Windows servers that can hit the 64-bit maximum for memory) and SQL Server will use it natively.


60. What are the basic functions for master, msdb, model, tempdb databases?

  • The Master database contains the catalog and metadata for all databases of the SQL Server instance and holds the engine together; SQL Server cannot start if the master database is not working.
  • The msdb database contains data of database backups, SQL Agent, DTS packages, SQL Server jobs, and log shipping.
  • The tempdb contains temporary objects like global and local temporary tables and stored procedures.
  • The model is a template database which is used for creating a new user database.

48. What is the function of SQL Server Agent Windows service?

– It is a Windows service which handles the tasks scheduled within the SQL Server environment. These tasks are also called as job and are stored with in SQL server. The jobs may run through a trigger, a predefined schedule or on demand.

– This service is very useful in determining why a particular job did not run as intended.


41. What are the steps you will take, if you are tasked with securing an SQL Server?


Again this is another open ended question. Here are some things you could talk about:


  • Preferring NT authentication,
  • using server, database and application roles to control access to the data,
  • securing the physical database files using NTFS permissions,
  • using an unguessable SA password,
  • restricting physical access to the SQL Server,
  • renaming the Administrator account on the SQL Server computer,
  • disabling the Guest account,
  • enabling auditing,
  • using multiprotocol encryption,
  • setting up SSL,
  • setting up firewalls,
  • isolating SQL Server from the web server etc.


42. Name five different tools which can be used for performance tuning and their associated purpose.

  • Performance Monitor\System Monitor – Tool to capture macro level performance metrics.
  • Profiler – Tool to capture micro level performance metrics based on the statements issued by a login, against a database or from host name.
  • Server Side Trace – System objects to write the detailed statement metrics to a table or file, similar to Profiler.
  • Dynamic Management Views and Functions – SQL Server objects with low level metrics to provide insight into a specific portion of SQL Server i.e. the database engine, query plans, Service Broker, etc.
  • Management Studio’s Built-In Performance Reports – Ability to capture point in time metrics as pre-defined by Microsoft.
  • Custom scripts – Custom scripts can be developed to monitor performance, determine IO usage, monitor fragmentation, etc. all in an effort to improve performance.
  • Third party applications – Performance monitoring and tuning applications from vendors in the SQL Server community.

13. Log Shipping Configuration?



To set up log shipping you must have sysadmin rights on the server.

Minimum Requirements

  1. SQL Server 2005 or later
  2. Standard, Workgroup or Enterprise editions must be installed on all server instances involved in log shipping.
  3. The servers involved in log shipping should have the same case sensitivity settings.
  4. The database must use the full recovery or bulk-logged recovery model
  5. A shared folder for copying T-Log backup files
  6. SQL Server Agent Service must be configured properly

In addition, you should use the same version of SQL Server on both ends. It is possible to log ship from SQL Server 2005 to SQL Server 2008, but you cannot do it the opposite way. Also, since log shipping will primarily be used for failover, if you have the same version on each end and there is a need to fail over, you at least know you are running the same version of SQL Server.

Steps to Configure Log-Shipping:

  1.  Make sure your database is using the full or bulk-logged recovery model. You can check the current recovery model by querying sys.databases:

SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'jugal'

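The recovery model itself is changed with ALTER DATABASE; a sketch using the same database name:

```sql
USE [master];
ALTER DATABASE [jugal] SET RECOVERY FULL;
```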

2. On the primary server, right click on the database in SSMS and select Properties. Then select the Transaction Log Shipping Page. Check the “Enable this as primary database in a log shipping configuration” check box.

3. The next step is to configure and schedule a transaction log backup. Click on Backup Settings… to do this.

If you are creating backups on a network share enter the network path or for the local machine you can specify the local folder path. The backup compression feature was introduced in SQL Server 2008 Enterprise edition. While configuring log shipping, we can control the backup compression behavior of log backups by specifying the compression option. When this step is completed it will create the backup job on the Primary Server.

  4.  In this step we will configure the secondary instance and database. Click on the Add… button to configure the Secondary Server instance and database. You can add multiple servers if you want to set up one-to-many server log shipping.

When you click the Add… button it will take you to the below screen where you have to configure the Secondary Server and database. Click on the Connect… button to connect to the secondary server. Once you connect to the secondary server you can access the three tabs as shown below.


Initialize Secondary Database tab

In this step you can specify how to create the data on the secondary server. You have three options: create a backup and restore it, use an existing backup and restore or do nothing because you have manually restored the database and have put it into the correct state to receive additional backups.


Copy Files Tab

In this tab you have to specify the path of the Destination Shared Folder where the Log Shipping Copy job will copy the T-Log backup files. This step will create the Copy job on the secondary server.


Restore Transaction Log Tab

Here you have to specify the database restoring state information and restore schedule. This will create the restore job on the secondary server.

5. In this step we will configure Log Shipping Monitoring which will notify us in case of any failure. Please note Log Shipping monitoring configuration is optional.

Click on the Settings… button, which will take you to the “Log Shipping Monitor Settings” screen. Click on the Connect… button to set up a monitor server. Monitoring can be done from the source server, the target server or a separate SQL Server instance. We can configure alerts on the source/destination server if the respective jobs fail. Lastly, we can also configure how long job history records are retained in the msdb database. Please note that you cannot add a monitor instance once log shipping is configured.


6. Click on the OK button to finish the Log Shipping configuration and it will show you the below screen.


14. What is TUF?


  • The TUF file is known as the transaction undo file
  • This file is created when log shipping is configured in SQL Server
  • It consists of the list of uncommitted transactions while the backup is going on at the primary server in log shipping.
  • If this file is deleted, you have to reconfigure log shipping on the secondary server.
  • This file is located in the path where the transaction log files are saved.


The .TUF file is the Transaction Undo File, which is created when log shipping to a server in standby mode.

When the database is in standby mode, database recovery is done when the log is restored; this mode also creates a file on the destination server with the .TUF extension, which is the transaction undo file.

This file contains information on all the modifications performed at the time the backup was taken.

The file plays an important role in standby mode, and the reason is fairly obvious: while restoring the log backup, all uncommitted transactions are recorded in the undo file, and only committed transactions are written to disk, which enables users to read the database. So when we restore the next transaction log backup, SQL Server will fetch all the uncommitted transactions from the undo file and check against the new transaction log backup whether they have committed or not.

If found to be committed, the transactions will be written to disk; otherwise they stay in the undo file until they get committed or rolled back.


15. Mirroring Configuration?



16. Types of replications?



17. Replication Configuration?



24. What is Quorum?


Quorum is the cluster's configuration file. This file (quorum.log) resides on the quorum disk (one disk from the shared disk array). Quorum is the main arbiter between all nodes; it stores the latest cluster configuration and resource data. This helps the other nodes take ownership when one node goes down.


25. What is IsAlive and LooksAlive?


Looks Alive check: The Looks Alive check is a basic resource health check to verify that the service (the SQL Server service in our context) is running properly. To perform this, the cluster service queries the Windows Service Control Manager to check the status of the service. By default the Looks Alive check happens every five seconds.

Is Alive check: An exhaustive check to verify that a resource is running properly. If this check fails, the resource is taken offline and the failover process is triggered. During the Is Alive check, the cluster service connects to the SQL Server instance and executes SELECT @@SERVERNAME. It checks only the availability of the SQL Server instance and does not check the availability of user databases.


You can specify two polling intervals and a timeout value for resources. The polling intervals affect how often the MSCS Resource Monitor checks that the resource is available and operating. There are two levels of polling; they are known in Cluster Administrator as “Looks Alive” and “Is Alive.” These values are named for the calls that the Resource Monitor makes to the resource to perform the polling. In “Looks Alive” polling, MSCS performs a cursory check to determine if the resource is available and running. In “Is Alive” polling, MSCS performs a more thorough check to determine if the resource is fully operational. The timeout value specifies how many seconds MSCS waits before it considers the resource failed.


26. Explain Deadlock and Blocking?



27. How to find Deadlocks?



28. Explain isolation levels?



29. What is difference between read committed and snapshot isolation level?



30. Difference between clustered and non clustered index?


A clustered index is a special type of index that reorders the way records in the table are physically stored.

Therefore a table can have only one clustered index, and it is usually created on the primary key. The leaf nodes of a clustered index contain the data pages.


A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.

There can be as many as 249 nonclustered indexes per table (999 from SQL Server 2008 onwards).


Cluster Index

1 A clustered index determines how the rows of the table are physically stored.
2 A clustered index exists at the physical level.
3 It sorts the data at the physical level.
4 It works on the complete table.
5 The whole table is kept in the form of sorted data.
6 A table can contain only one clustered index.

Non Cluster Index

1 A nonclustered index is a separate structure, like a report about the table.
2 It is not created at the physical level but at the logical level.
3 It does not sort the data at the physical level.
4 A table can have many nonclustered indexes.
5 It works on the logical order of the data, not the physical order.

Clustered Index:- A clustered index physically stores the data of the table in the order of the key values, and the data is re-sorted whenever a new value is inserted or a value is updated in the column on which it is defined.

In a table only 1 clustered index is possible.

In a clustered index, the leaf node contains the actual data.

Non Clustered Index:- In the case of a nonclustered index, it creates a separate list of key values (a table of pointers) which point to the location of the data in the data pages.

In a table 249 nonclustered indexes are possible.

In a nonclustered index, the leaf node contains a pointer to the data rows of the table.
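A minimal sketch of both index types on a hypothetical table:

```sql
CREATE TABLE Employee (
    EmployeeID INT         NOT NULL,
    LastName   VARCHAR(50) NOT NULL
);

-- Leaf level holds the data pages themselves; only one allowed per table
CREATE CLUSTERED INDEX IX_Employee_ID ON Employee (EmployeeID);

-- Leaf level holds index rows that point back to the data
CREATE NONCLUSTERED INDEX IX_Employee_LastName ON Employee (LastName);
```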


40. What is lock escalation?


Lock escalation is the process of converting a lot of low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure; too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates many fine-grain locks to fewer coarse-grain locks. The lock escalation threshold was definable in SQL Server 6.5, but from SQL Server 7.0 onwards it is dynamically managed by SQL Server.
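From SQL Server 2008 onwards, escalation can also be controlled per table; a sketch with a hypothetical table name:

```sql
-- DISABLE prevents escalation to table locks; other options are TABLE and AUTO
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);
```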


41. What are the steps you will take to improve performance of a poor performing query?


This is a very open ended question and there could be a lot of reasons behind the poor performance of a query. But some general issues that you could talk about would be: No indexes, table scans, missing or out of date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written query with unnecessarily complicated joins, too much normalization, excess usage of cursors and temporary tables. Some of the tools/ways that help you troubleshooting performance problems are: SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler, Windows NT /2000 Performance monitor, Graphical execution plan in Query Analyzer.



43. Explain three different approaches to capture a query plan.

  • Graphical Query Plan
  • sys.dm_exec_query_optimizer_info
  • sys.dm_exec_query_plan
  • sys.dm_exec_query_stats

44. Name three different options to capture the input (code) for a query in SQL Server.

  • fn_get_sql
  • sys.dm_exec_sql_text
  • DBCC INPUTBUFFER

45. Explain a SQL Server deadlock, how a deadlock can be identified, how it is a performance problem and some techniques to correct deadlocks.?

  • A deadlock is a situation where 2 spids have data locked and cannot release their lock until the opposing spid releases their lock. Depending on the severity of the deadlock, meaning the amount of data that is locked and the number of spids that are trying to access the same data, an entire chain of spids can have locks and cause a number of deadlocks, resulting in a performance issue.
  • Deadlocks can be identified by Profiler in either textual, graphical or XML format.
  • Deadlocks are a performance problem because they can prevent 2 or more processes from being able to process data.  A deadlock chain can occur and impact hundreds of spids based on the data access patterns, number of users, object dependencies, etc.
  • Deadlocks could require a database design change, T-SQL coding change to access the objects in the same order, separating reporting and OLTP applications, including NOLOCK statements in SELECT queries that can accept dirty data, etc.

46. Explain query execution plan?

– The optimizer available in SQL Server optimizes the code to be effectively executed.
– A query execution plan shows how this optimizer would run the query.
– Query execution plan can be viewed by :
– Using the Show Execution Plan option available in Query Analyzer,
– Displaying Estimated Execution Plan on the query dropdown menu,
– Using the SET SHOWPLAN_TEXT ON command before running a query, or capturing the execution plan event in a SQL Server Profiler trace.
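The SET SHOWPLAN_TEXT option mentioned above can be used like this (the table name is from the earlier example):

```sql
SET SHOWPLAN_TEXT ON;
GO
SELECT * FROM example1;  -- the plan is returned; the query is not actually executed
GO
SET SHOWPLAN_TEXT OFF;
GO
```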

47. When is the UPDATE_STATISTICS command used?

– This command is used after the processing of large amounts of data is done.
– Whenever a large number of deletions, modifications or copies takes place in the tables, the statistics on the indexes need to be updated to take care of these changes. UPDATE STATISTICS performs this job.
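A sketch of the command against a hypothetical table:

```sql
-- Rebuild the statistics for every index on the table using a full scan
UPDATE STATISTICS dbo.Employee WITH FULLSCAN;
```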

49. What do you mean by an execution plan? Why is it used? How would you view it?

a.) An execution plan can be called as a road map that graphically or textually shows the data retrieval methods which have been chosen by the SQL Server query optimizer, for a stored procedure or ad- hoc query.

b.) It is used because it is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure.

c.) There exists an option called “Show Execution Plan” in Query Analyzer. If this option is turned on, it will display the query execution plan in a separate window when the query is run.

50. What is Live Lock?


A livelock is one where a request for an exclusive lock is denied continuously because a series of overlapping shared locks keep interfering with each other; as they adapt to each other they keep changing status, which prevents any of them from completing the task. In SQL Server a livelock occurs when read transactions applied to a table cause a write transaction to wait indefinitely. This is different from a deadlock, as in a deadlock both processes wait on each other.

A human example of a livelock would be two people who meet face-to-face in a corridor and each moves aside to let the other pass, but they end up moving from side to side without making any progress, because they always move the same way at the same time and never get past each other.


40. What is difference between Index Seek vs. Index Scan?

  • Index Seek and Index Scan are operations shown in execution plans when tuning queries.
  • A Table Scan reads every record of the table, so the cost is proportional to the total number of rows in the table.
  • An Index Scan is preferred only when the table is small.
  • An Index Seek only touches the rows that qualify and the pages that contain those qualifying rows, so the cost is proportional to the number of qualifying rows and pages instead of the total number of rows in the table.
  • Index Seek is preferred for highly selective queries.

41 What are Sparse columns?

  • A sparse column is a type of column which is used to optimize storage for NULL values.
  • When a column has a big number of NULLs, then by defining that column as a sparse column we can save a large amount of disk space.
  • The drawback of a sparse column is that it requires more space for the non-NULL values. When we define a column as sparse, it requires an additional 4 bytes for each non-NULL value.
  • For example, a DATETIME value in a non-sparse column requires 8 bytes of storage, whereas if we define that column as a sparse column it will require 12 bytes.
  • It is not possible to set the ROWGUIDCOL and IDENTITY properties on a sparse column.
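A sketch of a sparse column definition on a hypothetical table:

```sql
CREATE TABLE Contact (
    ContactID INT IDENTITY PRIMARY KEY,
    Fax VARCHAR(20) SPARSE NULL  -- mostly NULL, so SPARSE saves space for the NULL rows
);
```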

42. What is SQL Profiler?

-SQL Server provides a graphical tool which helps system administrators to monitor T-SQL statements of Database Engine.

-SQL Profiler can capture and store data related to every event to a file or a table.

-SQL Server Profiler can be used

1. To create a trace 
2. To store the trace results in a table.
3. To watch the trace results when the trace runs 
4. To replay the trace results 
5. To start, stop, pause, and modify the trace results

43. What is Fill factor?

  • The ‘fill factor’ option indicates how full SQL Server will make each index page when it creates or rebuilds an index.
  • When an index page has no free space for inserting a new row, SQL Server creates a new index page and transfers some rows from the old page to the new one. This process is called a page split.
  • If we want to reduce the number of page splits, we can use the fill factor option; it tells SQL Server to reserve some free space on each index page.
  • The fill factor is a value from 1 through 100 that indicates the percentage of the index page to be filled (the remainder is left as free space). The default value is 0, which is treated the same as 100.
  • If the table contains data that is not changed frequently, the fill factor can be set to 100. When the table’s data is modified frequently, a lower value such as 80 leaves room for new rows.
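A fill factor can be applied when an index is created or rebuilt; a minimal sketch (the table and index names are hypothetical):

```sql
-- Leave 20% free space on each leaf page of the index
CREATE INDEX IX_Employee_LastName
    ON Employee (LastName)
    WITH (FILLFACTOR = 80);

-- Or apply it while rebuilding an existing index
ALTER INDEX IX_Employee_LastName ON Employee
    REBUILD WITH (FILLFACTOR = 80);
```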

44. What are Magic tables in SQL Server?

-In SQL Server there are two system tables, “Inserted” and “Deleted”, called Magic tables.

-These are not physical tables but virtual tables, generally used within triggers to retrieve the inserted, deleted, or updated rows.

-When a record is inserted into the table, that record appears in the INSERTED magic table.

-When a record is updated, the existing record appears in the DELETED magic table and the modified data appears in the INSERTED magic table.

-When a record is deleted from the table, that record appears in the DELETED magic table.
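A minimal trigger sketch that reads both magic tables (the Employee and EmployeeAudit tables and their columns are hypothetical):

```sql
-- Audit updates by comparing the old (deleted) and new (inserted) values
CREATE TRIGGER trg_Employee_Update
ON Employee
AFTER UPDATE
AS
BEGIN
    INSERT INTO EmployeeAudit (EmpID, OldName, NewName, ChangedAt)
    SELECT d.EmpID, d.EmpName, i.EmpName, GETDATE()
    FROM deleted d
    JOIN inserted i ON i.EmpID = d.EmpID;
END;
```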

45. What is a Linked Server?

  • When we want to query a remote database server along with the local database server, we can register the remote SQL Server on the local SQL Server using the Linked Server feature.
  • We can then query both servers using T-SQL.
  • The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.
  • By using a linked server we can write SQL statements in a clean and easy way to retrieve, join, and combine remote data with local data.
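A minimal sketch of registering and querying a linked server (the server, login, database, and table names are hypothetical):

```sql
-- Register a remote SQL Server instance as a linked server
EXEC sp_addlinkedserver
    @server = N'REMOTESRV',
    @srvproduct = N'SQL Server';

-- Map local logins to a remote login
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTESRV',
    @useself = N'FALSE',
    @rmtuser = N'remote_user',
    @rmtpassword = N'remote_password';

-- Query the remote table with a four-part name
SELECT * FROM REMOTESRV.SalesDB.dbo.Orders;
```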

46. What is a WITH(NOLOCK)?

  • WITH (NOLOCK) is a table hint that allows a SELECT statement to read data that is locked by a transaction that has not yet committed (a dirty read). It is specified after the table name in a SELECT statement.
  • Once the transaction is committed or rolled back, the NOLOCK hint makes no difference because the locks have already been released.
  • Syntax: WITH (NOLOCK)
  • Example: SELECT * FROM Employee WITH (NOLOCK)

47. What is Resource governor?


Resource Governor is a feature that manages SQL Server workload and system resource consumption. It lets us limit the amount of CPU and memory that requests can consume by governing (throttling) them on the SQL Server instance.


48. 2008 Features?



49. Why is resource governor required?


If different workloads run on the same SQL Server and each needs different resources, or if workloads compete for resources with each other and degrade the performance of the whole server, Resource Governor becomes very important.


50. Does SQL Server have any default resource governor component?


Yes, SQL Server has two resource governor components created by default.

1) Internal – Used exclusively by the database engine; users have no control over it.

2) Default – Used by all workloads that are not assigned to any other group.


51. What are the major components of the resource governor?


  • Resource Pools
  • Workload Groups
  • Classification

In simple words here is what the process of resource governor is.

  • Create resource pool
  • Create a workload group
  • Create classification function based on the criteria specified
  • Enable Resource Governor with classification function
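The steps above can be sketched in T-SQL (the pool, group, login, and function names are hypothetical):

```sql
-- 1. Create a resource pool that caps CPU and memory
CREATE RESOURCE POOL ReportPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);

-- 2. Create a workload group that uses the pool
CREATE WORKLOAD GROUP ReportGroup
    USING ReportPool;
GO

-- 3. Classifier function: route a specific login to the group
CREATE FUNCTION dbo.fn_rg_classifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp SYSNAME = N'default';
    IF SUSER_SNAME() = N'report_user'
        SET @grp = N'ReportGroup';
    RETURN @grp;
END;
GO

-- 4. Enable Resource Governor with the classifier function
ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.fn_rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```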





52. Explain the following:

a.) Dirty pages.
These are the buffer pages that contain modifications which have not been written to disk.

b.) ETL – Extraction, Transformation, and Loading.
– It is the process of copying and cleaning data from heterogeneous sources.
– It is an important part of development projects for data warehousing and business intelligence.


53. Differentiate between a Local and a Global temporary table?

– A local temporary table (created with a single “#”) exists only for the duration of the connection that created it or, if defined inside a compound statement, for the duration of that statement.

– Global temporary tables (created with a double “##”) are visible to all sessions.
– A global temporary table is dropped when the session that created it ends and all other sessions have stopped referencing it.

54. What is CTE (Common Table Expression)?

  • When a complex SQL statement has a number of joins, it can be made easier to read by using a Common Table Expression.
  • Consider the following SQL statement.

SELECT E.EmpName, E.Department, E.Gender
FROM (SELECT emp.EmpName, dept.Department, emp.Gender
      FROM Employee emp
      LEFT JOIN Department dept ON emp.DeptID = dept.DeptID) E
WHERE E.Gender = 'Male'

The syntax of CTE is as follow

– The CTE Name (followed by WITH keyword)
– The Column List (Optional)
– The Query (Appears within parentheses after the AS keyword)

If we write the above messy query using CTE it would be like

WITH E(EmpName, Department, Gender)
AS
(
  SELECT emp.EmpName, dept.Department, emp.Gender
  FROM Employee emp
  LEFT JOIN Department dept ON emp.DeptID = dept.DeptID
)
SELECT * FROM E WHERE E.Gender = 'Male'

This way the query can be made more readable and easy to understand.

55. Replication prerequisites:

1. Verify connectivity between the publisher, distributor, and subscriber. If you are unable to access any of them, have your network team open firewall rules so that these servers can communicate.

2. You need to be a sysadmin in order to set up replication.

3. Each published table needs a primary key. That should already be the case if replication was previously set up.

4. Allocate enough space for the transaction log.

5. Do not use explicit transactions in triggers at the subscriber, if you currently have any, because they can sometimes raise errors.

56. How to Reverse Log Shipping Roles

Reversing log shipping is an often overlooked practice. When DBAs need to fail over to a secondary log shipping server, they tend to worry about getting log shipping back up later. This is especially true in the case of very large databases. If you’re using log shipping as your primary disaster recovery solution and you need to fail over to the secondary log shipping server, you should get log shipping running as quickly as possible. With no disaster recovery failover in place, you might be running exposed.

Reversing log shipping is simple. It doesn’t require reinitializing the database with a full backup if performed carefully. However, it’s crucial that you remember the following:

  • You need to preserve the log sequence number (LSN) chain.
  • You need to perform the final log backup using the NORECOVERY option. Backing up the log with this option puts the database in a state that allows log backups to be restored and ensures that the database’s LSN chain doesn’t deviate.
  • The primary log shipping server must still be accessible to use this technique.

To fail over to a secondary log shipping server, follow this 10-step process:

  1. Disable all backup jobs that might back up the database on both log shipping partners.
  2. Disable the log shipping jobs.
  3. Run each log shipping job in order (i.e., backup, copy, and restore).
  4. Drop log shipping.
  5. Manually back up the log of the primary database using the NORECOVERY option. Use the command

BACKUP LOG [DatabaseName]
  TO DISK = 'BackupFilePathname'
  WITH NORECOVERY

where DatabaseName is the name of the database whose log you want to back up and BackupFilePathname is the backup file's pathname (e.g., Z:\SQLServerBackups\TLog.bck).

  6. Restore the log backup on the secondary database using the RECOVERY option, and bring the secondary database online. The primary and secondary databases have now switched positions.
  7. Back up the log of the new primary database (optional).
  8. Restore the log on the new secondary database using the NORECOVERY option (optional).
  9. Reconfigure log shipping.
  10. Re-enable any backup jobs that were disabled.

Note that step 7 and step 8 are listed as optional because they’re not required for establishing log shipping. However, I recommend performing these steps to ensure that the log shipping configuration will proceed without any problems.

With a few minor adjustments, this 10-step process works with multiple secondary log shipping databases. You perform the same basic steps, keeping in mind that the other secondary databases will still be secondary databases after the failover. After you back up the log on the new primary database, you should use the NORECOVERY option to restore that backup on all the planned secondary databases. You can then add them as secondary databases to the new primary database.
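The critical backup and restore pair in this process can be sketched as follows (the database name and backup path are hypothetical):

```sql
-- On the old primary: final log backup, leaves the database restoring
BACKUP LOG [SalesDB]
    TO DISK = 'Z:\SQLServerBackups\SalesDB_tail.trn'
    WITH NORECOVERY;

-- On the old secondary: restore the tail and bring it online as the new primary
RESTORE LOG [SalesDB]
    FROM DISK = 'Z:\SQLServerBackups\SalesDB_tail.trn'
    WITH RECOVERY;
```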

57. Is it possible to configure log shipping without a domain?

You could move the log backups using FTP; then you don't need a domain account for the copy job, just FTP access.

58. Quorum types in Mirroring?

A quorum is the relationship between the witness, the principal, and the mirror. Depending on which servers can communicate, it falls into one of three states.

Full Quorum —> All three (witness, principal, and mirror) can communicate with each other. Because the witness is present, automatic failover can occur.

Quorum —> This state exists when the witness can communicate with either partner.

Partner-to-Partner —> Only the principal and the mirror can communicate with each other.

59. Types of startup Service accounts?

Local User Account: This user account is created on the server where SQL Server is installed; it does not have access to network resources.

Local Service Account: This is a built-in Windows account that is available for configuring services in Windows. It has the same permissions as accounts in the Users group, so it has limited access to the resources on the server. This account is not supported for the SQL Server and SQL Server Agent services.

Local System Account: This is a built-in Windows account that is available for configuring services in Windows. It is a highly privileged account that has access to all resources on the server with administrator rights.

Network Service Account: This is a built-in Windows account that is available for configuring services in Windows. It has permission to access network resources under the computer account.

Domain Account: This account is part of your domain and has access to the network resources for which it is granted permission. It is always advised to run SQL Server and related services under a domain account with the minimum privileges needed to run SQL Server and its related services.

Where can you see the Local System, Local Service, and Network Service accounts? These are built-in Windows accounts that are part of the operating system. You cannot use them to log in to the system; they are meant for securing and authenticating services.

60. Replication Agents Behaviour?

Replication Agents

The following are the replication agents and let us discuss each one of those in detail.

Snapshot Agent – It is used for creating and propagating the snapshots used in snapshot replication. Each Published database has its own Snapshot Agent that runs on the distributor and connects to the publisher and takes a snapshot of the objects. It then updates the information in the distribution database. Snapshot Agents are used with all types of replication.

Distribution Agent – It applies the data from snapshot replication or transactions from transactional replication to subscribers. It can run on the distributor or on subscribers. It runs on the distributor for push subscriptions and on subscriber for pull subscriptions. This agent is not used with merge replication.

Merge Agent – The Merge agent is used in Merge replication and it merges the incremental changes that have occurred since the last reconciliation. In Merge replication there is no distribution or snapshot agent involved instead the Merge agent communicates with both the publisher and distributor. There is one Merge agent for each merge subscription.

Log Reader Agent – The Log Reader Agent is used for transactional replication. It moves information from the transaction log on the publisher to the distribution database. Each database that uses transactional replication has its own Log Reader Agent on the publisher.

Queue Reader Agent – This is used in transactional replication with the queued updating option. It takes the changes that have occurred at the subscriber and applies them to the publisher. There is only one Queue Reader Agent per database.

61. Isolation Levels?

This article provides an easy-to-understand view of what Isolation Levels really mean and when to use which level. Isolation Level is a setting that decides how data which is a part of an ongoing transaction is made visible to other transactions.

Before going into the isolation types, let us first look at the problems that can occur if isolation levels are not set properly.

1. Dirty Read

This occurs when a second transaction reads a row that is being modified by another transaction which has not yet committed. Since the data is not committed, we might end up reading data that never actually takes effect.

2. Non-repeatable read

This occurs when a transaction reads the same row more than once and gets a different value each time. For example: you read a row and do some manipulation; in the meantime another transaction modifies that row; when you read the row again, you find it has changed. This is called a non-repeatable read.

3. Phantom rows

This occurs when you re-read a range of rows within the same transaction and new rows appear in (or rows disappear from) that range because another transaction changed it. The rows that appear are called phantom rows.

Now that we know what each problem causes, let us look at ways of avoiding or minimizing them with the isolation level settings.

Isolation levels

* Read Uncommitted

This is as good (or bad) as not having any isolation. All data which is uncommitted is readable from any connection. This should not be used unless you have a very good reason to do so.

* Read Committed

This prevents dirty reads. This does not prevent phantoms or non-repeatable reads. This is the default. Although it is the default, it does not mean that this isolation level is ideal for all transactions. One of the regular tendencies amongst techies is to simply use default values without giving it a second thought! I cannot imagine the number of phantom and non-repeatable reads that must be occurring in the world because someone simply used the default value. It is a scary thought to say the least.

This level is obviously more restrictive than the Read Uncommitted level.

* Repeatable read

This prevents dirty reads as well as non-repeatable reads. It does not prevent phantom rows. This is more restrictive than Read Committed level. When I say restrictive what does it mean? It means that the chances of other transactions having to wait for this one to finish are INCREASED. Another way of saying this is Repeatable Read reduces concurrency compared to Read Committed level.

* Serializable

This is the most restrictive of the options. This should never be used as the default level. If you use this one as the default, it will most probably create a single user system!

So now let's look at which isolation level can be used for which problem, as given in the table below.


Problem(s) to prevent                              Isolation level to use

Dirty Read                                         Read Committed (Default of SQL Server)

Dirty Read and Non-Repeatable Read                 Repeatable Read

Dirty Read, Non-Repeatable Read and Phantom Rows   Serializable

To retain all three problems                       Read Uncommitted

How to set isolation levels:

The syntax to set isolation levels is as given below
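A minimal sketch of the syntax (the table name is hypothetical):

```sql
-- Set the isolation level for the current session
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- or: READ COMMITTED | REPEATABLE READ | SNAPSHOT | SERIALIZABLE

BEGIN TRANSACTION;
SELECT * FROM Employee;   -- runs under the chosen isolation level
COMMIT;
```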


Hope this article gave an understanding of the isolation levels, the problems caused by incorrect isolation levels, and how to solve them.

62. What is the use of DBCC commands?

DBCC stands for database consistency checker. There are many DBCC commands in SQL Server. We generally use these commands to check the consistency of databases, i.e., for maintenance, validation tasks, and status checks.
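A few commonly used DBCC commands as a sketch (the database, table, and index names are hypothetical):

```sql
DBCC CHECKDB ('SalesDB');         -- check logical and physical integrity of a database
DBCC CHECKTABLE ('dbo.Orders');   -- check a single table
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_Date');  -- view statistics details
DBCC SQLPERF (LOGSPACE);          -- report transaction log space usage
```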

63. Is it possible to truncate the table which is the part of replication ?

No, it is not possible to truncate a table that is part of replication; attempting to do so breaks replication and leaves you in trouble. Instead of truncating the table, you can delete all the data with DELETE. Note that TRUNCATE TABLE always reseeds the identity column; if you use DELETE instead, you can reseed the identity column yourself (for example with DBCC CHECKIDENT).

64. Why can’t I run TRUNCATE TABLE on a published table?

When we TRUNCATE a table, the individual row deletions are not logged and triggers are not fired. It is not permitted on a published table because replication cannot track the changes caused by the operation:
transactional replication tracks changes through the transaction log; merge replication tracks changes through triggers on published tables.

65. What is Collation in SQL Server?

1. A collation specifies the bit representation of each character in a data set.
2. A collation also determines the rules that sort and compare the data.
3. SQL Server supports storing objects that have different collations in a single database.
4. For non-Unicode columns, the collation setting specifies the code page for the data and which characters can be represented.
5. Data that is moved between non-Unicode columns must be converted from the source code page to the destination code page.

The result of a T-SQL statement can vary when the statement is run in the context of databases that have different collation settings. If possible, use a standardized collation for your organization.
This way, you do not have to explicitly specify the collation in every character or Unicode expression. If you must work with objects that have different collation and code page settings, code your queries to consider the rules of collation precedence.

66. What are the options associated with collation in SQL Server ?

The options associated with a collation are :
1. Case Sensitivity.
2. Accent Sensitivity.
3. Kana-Sensitivity.
4. Width Sensitivity.

These options are specified by appending them to the collation name.

For example, this collation Japanese_Bushu_Kakusu_100_CS_AS_KS_WS is case-sensitive, accent-sensitive, Kana-sensitive, and width-sensitive.

1. Case-sensitive (_CS) : Distinguishes between uppercase and lowercase letters. If selected, lowercase letters sort ahead of their uppercase versions. If this option is not selected, the collation will be case-insensitive. That is, SQL Server considers the uppercase and lowercase versions of letters to be identical for sorting purposes. You can explicitly select case insensitivity by specifying _CI.

2. Accent-sensitive (_AS) : Distinguishes between accented and unaccented characters. For example, ‘a’ is not equal to ‘ấ’. If this option is not selected, the collation will be accent-insensitive. That is, SQL Server considers the accented and unaccented versions of letters to be identical for sorting purposes. You can explicitly select accent insensitivity by specifying _AI.

3. Kana-sensitive (_KS) : Distinguishes between the two types of Japanese kana characters: Hiragana and Katakana. If this option is not selected, the collation is Kana-insensitive. That is, SQL Server considers Hiragana and Katakana characters to be equal for sorting purposes. Omitting this option is the only method of specifying Kana-insensitivity.

4. Width-sensitive (_WS) : Distinguishes between a single-byte character and the same character when represented as a double-byte character. If this option is not selected, SQL Server considers the single-byte and double-byte representation of the same character to be identical for sorting purposes. Omitting this option is the only method of specifying width-insensitivity.

67. What are the supported level of Collation in SQL Server ?

Collations can be set at the following levels of an instance of SQL Server:

Server-level collations :
The default server collation is set during SQL Server setup, and also becomes the default collation of the system databases and all user databases. Note that Unicode-only collations cannot be selected during SQL Server setup because they are not supported as server-level collations.

After a collation has been assigned to the server, you cannot change the collation except by exporting all database objects and data, rebuilding the master database, and importing all database objects and data. Instead of changing the default collation of an instance of SQL Server, you can specify the desired collation at the time that you create a new database or database column.

Database-level collations :
When a database is created or modified, you can use the COLLATE clause of the CREATE DATABASE or ALTER DATABASE statement to specify the default database collation. If no collation is specified, the database is assigned the server collation.
You cannot change the collation of system databases except by changing the collation for the server.

The database collation is used for all metadata in the database, and is the default for all string columns, temporary objects, variable names, and any other strings used in the database. When you change the collation of a user database, know that there is the potential for collation conflicts when queries in the database access temporary tables. Temporary tables are always stored in the tempdb system database, which will use the collation for the instance. Queries that compare character data between the user database and tempdb may fail if the collations cause a conflict in evaluating the character data. You can resolve this by specifying the COLLATE clause in the query.

Column-level collations :
When you create or alter a table, you can specify collations for each character-string column by using the COLLATE clause. If no collation is specified, the column is assigned the default collation of the database.

Expression-level collations :
Expression-level collations are set when a statement is run, and they affect the way a result set is returned. This enables ORDER BY sort results to be locale-specific. Use a COLLATE clause such as the following to implement expression-level collations:
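A minimal sketch of such a clause (the table and column names are hypothetical):

```sql
-- Sort a result set using a locale-specific, case- and accent-sensitive collation
SELECT Name
FROM dbo.Customer
ORDER BY Name COLLATE Latin1_General_CS_AS;
```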

68. What are the collation rules in SQL Server ?

The collation label of a simple expression that references only one character string object is the collation label of the referenced object.

The collation label of a complex expression that references two operand expressions with the same collation label is the collation label of the operand expressions.

The collation label of the final result of a complex expression that references two operand expressions with different collations is based on the following rules:

1.Explicit takes precedence over implicit. Implicit takes precedence over Coercible-default:
Explicit > Implicit > Coercible-default

2.Combining two Explicit expressions that have been assigned different collations generates an error:
Explicit X + Explicit Y = Error

3.Combining two Implicit expressions that have different collations yields a result of No-collation:
Implicit X + Implicit Y = No-collation

4.Combining an expression with No-collation with an expression of any label, except Explicit collation (see the following rule), yields a result that has the No-collation label:
No-collation + anything = No-collation

5.Combining an expression with No-collation with an expression that has an Explicit collation, yields an expression with an Explicit label:
No-collation + Explicit X = Explicit
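Rule 1 is what lets an explicit COLLATE clause resolve a conflict between two implicitly collated columns; a sketch (the table and column names are hypothetical):

```sql
-- ColA and ColB have different column collations; comparing them directly
-- would yield No-collation. An explicit COLLATE takes precedence and
-- makes the comparison valid.
SELECT *
FROM dbo.T1
WHERE ColA = ColB COLLATE Latin1_General_CI_AS;
```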

68. When is the UPDATE STATISTICS command used?

The UPDATE STATISTICS command is used when a huge amount of data has been processed. When deletions, modifications, or bulk copies occur on a table that holds a huge amount of data, the statistics on its indexes must be updated to take these changes into account. UPDATE STATISTICS updates the statistics of the indexes or indexed views of the table accordingly.

UPDATE STATISTICS updates the query-optimization statistics on a table or indexed view. By default, the query optimizer updates statistics as necessary to improve the query plan; in some cases you can improve query performance by using the UPDATE STATISTICS command or the stored procedure sp_updatestats to update statistics more frequently than the default.

Updating statistics ensures that queries compile with up-to-date statistics.

However, updating statistics causes queries to recompile. It is recommended not to update statistics too frequently, because there is always a performance trade-off between improving query plans and the time it takes to recompile queries. UPDATE STATISTICS can use tempdb to sort the sample of rows for building statistics.

1. Updating All Statistics with sp_updatestats

EXEC sp_updatestats

2. Update all statistics on a table (the table name below is hypothetical)

UPDATE STATISTICS Sales.SalesOrderDetail

3. Update the statistics for an index (the index name below is hypothetical)

UPDATE STATISTICS Sales.SalesOrderDetail AK_SalesOrderDetail_rowguid

69. What are the DMVs in SQL Server?

Dynamic management views and functions return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.


Dynamic management views and functions return internal, implementation-specific state data. Their schemas and the data they return may change in future releases of SQL Server. Therefore, dynamic management views and functions in future releases may not be compatible with the dynamic management views and functions in this release. For example, in future releases of SQL Server, Microsoft may augment the definition of any dynamic management view by adding columns to the end of the column list. We recommend against using the syntax SELECT * FROM dynamic_management_view_name in production code because the number of columns returned might change and break your application.

70. What are the execution plan recommendations ?

Following are the execution plan cache recommendations

1. Explicitly parameterize variable parts of a query.
2. Create stored procedures to implement business functionality.
3. Code with sp_executesql to avoid stored procedure maintenance.
4. Implement the prepare/execute model to avoid resending a query string.
5. Avoid ad hoc queries.
6. Prefer sp_executesql over EXECUTE for dynamic queries.
7. Parameterize variable parts of queries with care.
8. Do not allow implicit resolution of objects in queries.

71. What is difference between EXEC(@sql) and sp_executeSQL @sql ?

To execute a T-SQL query string, we can use either EXEC or sp_executesql.

But there are the following important differences between EXEC and sp_executesql.

1. sp_executesql allows statements to be parameterized, so it is more secure than EXEC in terms of SQL injection.


2. sp_executesql can take advantage of cached query plans: the T-SQL string is compiled once, and the execution plan is reused by sp_executesql on subsequent executions.

3. Temp tables created in EXEC cannot use the temp table caching mechanism.

4. The T-SQL statements in the executed sql query variable string do not have access to any of the variables declared in the batch that contains the sp_executesql or EXECUTE statement.

The batch containing the sp_executesql or EXECUTE statement does not have access to variables or local cursors defined in the executed string.

5. If the executed string has a USE statement in the string that changes the database context, the change to the database context only lasts until sp_executesql or the EXECUTE statement completes.
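A minimal sketch of the parameterized form (the table, column, and parameter names are hypothetical):

```sql
DECLARE @sql NVARCHAR(200) =
    N'SELECT * FROM dbo.Employee WHERE DeptID = @DeptID';

-- The parameter is passed separately, so the cached plan can be reused
-- and the value is never concatenated into the SQL string
EXEC sp_executesql
    @sql,
    N'@DeptID INT',
    @DeptID = 10;
```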

72. What is an execution plan in sql server ?

An SQL query defines what you want to do but does not tell the server how to do it. For instance, you write a query to retrieve all employees working in Neekhra Technologies Pvt. Ltd. When the server receives the statement, the first thing it does is parse it [lexical analysis]. If there is no syntax error, the server continues to execute your request and decides the best way to compute the results. It chooses, for example, whether it is better to read the employees table completely or whether using an index on the employee id column would be faster, comparing the cost of all possible approaches. The way a statement is physically executed is called an execution plan or a query plan.

The part of SQL Server responsible for computing the optimal execution plan is called the query optimizer. The query optimizer bases its decision on its knowledge of the database content and of the optimizer elements (such as indexes and statistics) available in the database and tables.

73. What are the major performance killers in SQL Server ? Or What are the 12 major performance bottlenecks in sql server ?

The following are the major performance killers in SQL Server:
1. Poor indexing
2. Inaccurate statistics
3. Excessive blocking and deadlocks
4. Non-set-based operations, usually T-SQL cursors
5. Poor query design
6. Poor database design
7. Excessive fragmentation
8. Non-reusable execution plans
9. Poor execution plans, usually caused by parameter sniffing
10. Frequent recompilation of execution plans
11. Improper use of cursors
12. Excessive use or improper configuration of tempdb

74. What are the index designing recommendations in sql server ? Or What are the criteria for index creation in sql server ?

The main recommendation for index designing in SQL Server are as follows:

1. Analyze the WHERE clause and join criteria columns in the query.
2. Use narrow indexes on the tables.
3. Analyze column uniqueness in the table.
4. Analyze the column data type.
5. Consider column order in the index.
6. Consider the type of index (clustered vs. non-clustered).
7. Prefer integer and primary key columns when creating an index on a table.

75. What is row locator ?

An index row of a non-clustered index contains a pointer to the corresponding data row of the table. This pointer is called the row locator.

76. Can you create index on the BIT data type column in SQL Server ?

Yes, we can; Microsoft SQL Server allows an index to be created on a BIT column. But there is no significant advantage in doing so, because such a column has at most two distinct values and its selectivity is very low in most scenarios.


77. Explain the NOLOCK optimizer hint and some advantage and disadvantage of NOLOCK hint?

1. The NOLOCK query hint allows SQL Server to ignore the normal locks that are placed and held for a transaction allowing the query to complete without having to wait for the first transaction to finish and therefore release the locks.

2. This is a short-term fix to help prevent locking, blocking, or deadlocks.
However, when the NOLOCK hint is used, dirty data may be read, which can compromise the results returned to the user.


78. How to check if SQL Server is using Kerberos authentication?

SELECT net_transport, auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@spid


79. How to set the correct value for SQL Server max server memory?

There is no magic formula for this. Estimate the memory required by other applications running on the same server, the operating system, drivers, SQL Server non-bPool allocations, jobs, antivirus software, etc. Make sure you have an acceptable amount of available physical memory even when the system is under heavy load.

1. Consider the operating system memory requirement.

     Approximately 1 GB (Would increase if it is DC, cluster etc.)

2. Consider the memory requirements by other applications/processes running on the server.

   You have to derive it based on applications/processes/AV’s running on the system and their memory requirements. (Perfmon Process-> Private bytes and Working set can help)

3. Consider the memory requirements of the drivers/firmwares.

    You have to derive it based on memory requirements by drivers installed on the system. (RAMMAP can help)

4. Consider the non-bPool (aka MTL or MTR) memory requirements of SQL Server.

SELECT SUM(multi_pages_kb)/1024 AS multi_pages_mb FROM sys.dm_os_memory_clerks

(Skip the query above on SQL Server 2012 or later: the multi-page allocator was removed and the multi_pages_kb column no longer exists.)


Max worker threads * 2 MB (the per-thread stack size on 64-bit SQL Server)


Memory for direct Windows allocations is approximately 0 to 300 MB in most cases, but you may have to increase it if many third-party components are loaded into the SQL Server process (including linked server DLLs, third-party backup DLLs, etc.).


If you are using CLR extensively, add some additional memory for the CLR.

5. Consider the memory required by jobs (including replication agents, log shipping, etc.) and packages that will run on the server.

You have to derive this (it may vary from a few MBs to several GBs).

6. Consider SSAS and SSRS memory requirements.

You have to derive this from your environment.

7. Make sure there is enough free memory for the operating system.

Approximately (100 MB for each GB up to 4 GB) + (50 MB for each additional GB up to 12 GB) + (25 MB for each additional GB up to your RAM size)

8. Other memory requirements.

Account for any other memory requirements specific to your environment.

Once you have calculated reasonable values for all of the above requirements, take their sum and subtract it from the total physical memory to derive an ideal value for max server memory.

Max server memory = Total physical memory – (1+2+3+4+5+6+7+8)
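The arithmetic above can be sketched as follows. All figures here are hypothetical example values for a 32 GB server, not recommendations; only the step-7 rule of thumb comes from the text.

```python
# Hypothetical max server memory sizing per the steps above (values in MB).

def os_free_memory_mb(ram_gb: int) -> int:
    """Step 7 rule of thumb: 100 MB/GB up to 4 GB,
    then 50 MB/GB up to 12 GB, then 25 MB/GB beyond."""
    return (min(ram_gb, 4) * 100
            + max(min(ram_gb, 12) - 4, 0) * 50
            + max(ram_gb - 12, 0) * 25)

ram_gb = 32
total_physical_mb = ram_gb * 1024

reservations_mb = {
    "1_os": 1024,                # operating system
    "2_other_apps": 2048,        # other applications / anti-virus
    "3_drivers": 256,            # drivers / firmware
    "4_non_bpool": 576,          # MTL + worker-thread stacks
    "5_jobs_packages": 512,      # jobs, replication agents, packages
    "6_ssas_ssrs": 0,            # none installed in this example
    "7_os_free": os_free_memory_mb(ram_gb),  # 400 + 400 + 500 = 1300
    "8_other": 0,
}

max_server_memory_mb = total_physical_mb - sum(reservations_mb.values())
print(max_server_memory_mb)  # value to set via sp_configure 'max server memory'
```

Rerun the calculation whenever workloads on the server change; the reservation values are environment-specific estimates.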


80. Calculate the size of MemToLeave (MTL) and reserve it using the algorithm below.

MTL (MemToLeave) = (stack size * max worker threads) + additional space (256 MB by default; controlled by the -g startup option).

Stack size = 512 KB per thread for 32-bit SQL Server, and 904 KB per thread for 32-bit SQL Server running on a 64-bit system (WOW64).

For example: (256 threads * 512 KB) + 256 MB = 128 MB + 256 MB = 384 MB

The -g startup option is used to increase the additional space from the default 256 MB to any desired value.
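The MTL formula above works out as follows for the 32-bit example (256 worker threads, default -g value):

```python
# MemToLeave (MTL) sketch for 32-bit SQL Server, per the formula above.
STACK_KB = 512            # per-thread stack size, 32-bit SQL Server
MAX_WORKER_THREADS = 256  # example worker thread count
G_SWITCH_MB = 256         # default additional space; raised with -g

mtl_mb = (MAX_WORKER_THREADS * STACK_KB) // 1024 + G_SWITCH_MB
print(mtl_mb)  # 384
```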


81. Calculate the size of the BPool using the algorithm below.

The SQL Server buffer pool is the minimum of "physical RAM" and "user-mode address space (2 GB or 3 GB) – MTL", less the BUF structures:

BPool = minimum(physical memory, user address space – MTL) – BUF structures

BUF structures are arrays maintained by SQL Server to track the status of each buffer in the BPool. SQL Server makes a maximum of 32 allocation requests to the OS to reserve BPool pages.

SQL Server maintains a contiguous array to track status information associated with each buffer (8 KB page) in the BPool. In addition, SQL Server maintains a second array, a bitmap, to track committed and reserved pages: a bit value of 1 indicates the buffer is committed, and 0 indicates the page is reserved.

The BUF structures take approximately 16 MB when AWE is not enabled; when AWE is enabled, they use an additional 8 MB per GB of RAM in the system.
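Continuing the hypothetical 32-bit example, the BPool formula above can be sketched as (the 4 GB RAM and 2 GB user address space are assumed example values):

```python
# BPool sketch for a 32-bit SQL Server, per the formula above (values in MB).
physical_mb = 4 * 1024        # 4 GB of RAM (example)
user_address_space_mb = 2048  # 2 GB user-mode address space (no /3GB switch)
mtl_mb = 384                  # MemToLeave from the previous question
buf_structures_mb = 16        # approximate BUF array size, AWE disabled

bpool_mb = min(physical_mb, user_address_space_mb - mtl_mb) - buf_structures_mb
print(bpool_mb)  # 1648
```

Here the 2 GB user address space, not physical RAM, is the limiting factor, which is exactly why MemToLeave sizing mattered on 32-bit systems.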

82. What are the categories of DBCC commands?


Informational Statements

DBCC INPUTBUFFER
DBCC OUTPUTBUFFER
DBCC OPENTRAN
DBCC PROCCACHE
DBCC SHOW_STATISTICS
DBCC SHOWCONTIG
DBCC SQLPERF
DBCC TRACESTATUS
DBCC USEROPTIONS

Validation Statements

DBCC CHECKALLOC
DBCC CHECKCATALOG
DBCC CHECKCONSTRAINTS
DBCC CHECKDB
DBCC CHECKFILEGROUP
DBCC CHECKIDENT
DBCC CHECKTABLE

Maintenance Statements

DBCC CLEANTABLE
DBCC DBREINDEX
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
DBCC INDEXDEFRAG
DBCC SHRINKDATABASE
DBCC SHRINKFILE
DBCC UPDATEUSAGE

Miscellaneous Statements

DBCC dllname (FREE)
DBCC HELP
DBCC FREESESSIONCACHE
DBCC FREESYSTEMCACHE
DBCC TRACEON
DBCC TRACEOFF