Ranita works as an Assistant Professor in the Computer Science and Engineering Department at IIT Roorkee. She completed her PhD in the Computer Science and Engineering Department at IIT Kharagpur. Before that, she did her MTech at IIEST Shibpur and received job offers from companies such as Amazon and Samsung Labs. However, she preferred to pursue higher studies and stay in academics and teaching. She believes in the motto of free education and contribution.


Alma Mater:

PhD, Indian Institute of Technology Kharagpur (2012 to 2016)
Master of Engineering, Indian Institute of Engineering Science and Technology, Shibpur (2010 to 2012)
Bachelor of Technology, Kalyani Government Engineering College (2005 to 2009)


Assistant Professor, Indian Institute of Technology Roorkee
Project Linked Personnel, Indian Statistical Institute, Kolkata (2009 to 2010)

The correct answer is (b) remains the same. To understand this, let us revisit the informal algorithm for the timestamp-based protocol for concurrency control (copied from Wikipedia; you can also refer to the formal version of the algorithm there):

1) If a transaction wants to read an object,

a) but the transaction started before the object's write timestamp, it means that something changed the object's data after the transaction started. In this case, the transaction is canceled and must be restarted.
b) and the transaction started after the object's write timestamp, it means that it is safe to read the object. In this case, if the transaction timestamp is after the object's read timestamp, the read timestamp is set to the transaction timestamp.
2) If a transaction wants to write to an object,

a) but the transaction started before the object's read timestamp, it means that something has had a look at the object, and we assume it took a copy of the object's data. So we can't write to the object as that would make any copied data invalid; the transaction is aborted and must be restarted.
b) and the transaction started before the object's write timestamp, it means that something has changed the object since we started our transaction. In this case, we use the Thomas write rule and simply skip our write operation and continue as normal; the transaction does not have to be aborted or restarted.
c) otherwise, the transaction writes to the object, and the object's write timestamp is set to the transaction's timestamp.
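The rules above can be sketched in a few lines of Python. This is a minimal illustration of the checks only, not a full scheduler; the `Obj`, `read`, and `write` names are made up for this sketch:

```python
# Minimal sketch of basic timestamp ordering with the Thomas write rule.
# `Obj`, `read`, and `write` are illustrative names, not from a real library.

class Obj:
    def __init__(self):
        self.rts = 0  # read timestamp: largest timestamp among readers so far
        self.wts = 0  # write timestamp: timestamp of the latest writer

def read(ts, obj):
    if ts < obj.wts:            # rule 1(a): a younger transaction already wrote
        return "abort"
    obj.rts = max(obj.rts, ts)  # rule 1(b): safe; advance RTS if we are newer
    return "read"

def write(ts, obj):
    if ts < obj.rts:            # rule 2(a): a younger transaction already read
        return "abort"
    if ts < obj.wts:            # rule 2(b): Thomas write rule, skip obsolete write
        return "skip"
    obj.wts = ts                # rule 2(c): perform the write
    return "write"
```

For example, after a read by timestamp 4 sets RTS to 4, a write attempt by timestamp 2 hits rule 2(a) and aborts.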

Now consider the following example schedule:

T1      T2      T3      T4

Consider that after the execution of the last R(A), T4 needs to rollback (as given in the question). So, RTS(A) was 4 and now we need to decide what should be the new RTS for A. Note that, the schedule at hand now is something like the following:

T1      T2      T3

Independent of T4, this schedule will have RTS(A) = 3 when T2 tries to write and hence T2 should be aborted by rule 2(a).

Now, after the rollback of T4, consider option (a), i.e. changing RTS(A) to 0. This will certainly allow the W(A) by T2 to execute without any problem, as it falls under rule 2(c), and hence violates serializability by allowing T2 to overwrite data already read by T3.

Consider option (c), i.e. RTS(A) becomes equal to the timestamp of the transaction which read A just before T4; in this example RTS(A) then becomes 1, and the same problem arises as with option (a).

Therefore, if we choose option (b) and keep RTS(A) the same as before, i.e. 4, then the W(A) by T2 falls under rule 2(a) and T2 will be aborted. So, option (b) is the correct answer.

This is why RTS is kept as the largest of the timestamps of the transactions which have read the data, not the timestamp of the most recent reader.


This is an example of a fan trap and a chasm trap and how to resolve these scenarios. The following brief explanation may help you understand the concept, which is often explained inconsistently across the web. This explanation is consistent with the concept as explained in Wikipedia (https://en.wikipedia.org/wiki/Entity–relationship_model#Model_usability_issues). You can also refer to https://db.grussell.org/section005.html to understand it in detail.


You are very close to the answer, but there is one catch: the first row of pixels (at y = 0) and the last row of pixels (at y = 37) are not full length. You may modify your approach accordingly to reach the correct answer. Very good presentation, by the way.


Consider the attribute set ABCDEG and the FD set
AB → C, AC → B, AD → E, B → D, BC → A, E → G
Is the following decomposition of R(ABCDEG)
(a) dependency-preserving?
(b) lossless-join?
Give proper justification for your answer.
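Whichever decomposition is being tested, both checks reduce to computing attribute closures under the given FD set. A minimal Python sketch of closure computation follows; the `closure` helper is illustrative and not part of the question:

```python
# Attribute closure under the FD set from the question.
# Each FD is stored as a (LHS, RHS) pair of attribute sets.
FDS = [({'A', 'B'}, {'C'}), ({'A', 'C'}, {'B'}), ({'A', 'D'}, {'E'}),
       ({'B'}, {'D'}), ({'B', 'C'}, {'A'}), ({'E'}, {'G'})]

def closure(attrs, fds):
    """Compute the closure of `attrs` by repeatedly applying the FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the LHS is covered but the RHS adds something new, apply the FD.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result
```

For instance, the closure of AB comes out to all of ABCDEG, so AB is a key of R; the closures of each decomposed relation's attribute set are what the dependency-preservation and lossless-join tests compare.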



Consider this as a graph problem. You have 23 vertices corresponding to 23 different compounds. Whenever one compound reacts with another, we put an edge between them. So, according to the question, we have 9 vertices with exactly 3 edges each. Therefore, the total degree of these 9 vertices (subset S) is 27, an odd number. We know that the total degree of a graph is always even, because each edge contributes degree 2. Therefore, the total degree of the vertices in U \ S must be odd to make the total degree of U even.

Now, we can go through the statements one by one. Statement I is not always true, because to make the total degree of U \ S odd, not every vertex of U \ S needs to have odd degree; an odd number of them (even a single one) having odd degree is enough. This makes Statement II always true. If all the vertices in U \ S had even degrees, then the total degree of U \ S could not be odd, so Statement III is always false. Therefore, the correct option is (b) Only II.
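The handshake lemma used above can be checked mechanically. The sketch below builds an arbitrary graph on 23 vertices and confirms the degree sum is twice the number of edges, hence even; the random graph is only an illustration, since the lemma holds for every graph:

```python
import itertools
import random

# Handshake lemma check: every edge contributes 2 to the total degree,
# so the degree sum of any graph is even.
def degree_sum(edges, vertices):
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg.values())

vertices = list(range(23))
# Pick 30 arbitrary "reactions" (edges) among the 23 compounds.
edges = random.sample(list(itertools.combinations(vertices, 2)), 30)
# degree_sum(edges, vertices) equals 2 * len(edges), an even number.
```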

24 May 2016 - 7:43pm

Stack permutation arises when the algorithm in question has access to internal storage limited to a single stack data structure. You can follow the links below for details:
Please ask in detail if you have any particular query regarding this.
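As an illustration, the standard single-stack check (whether an output sequence can be produced from an input sequence using one stack) can be sketched as follows; the function name is mine, not from any linked material:

```python
# Simulate the single stack: push inputs in order, and greedily pop
# whenever the top of the stack matches the next required output symbol.
def is_stack_permutation(inp, out):
    stack, j = [], 0
    for x in inp:
        stack.append(x)
        while stack and j < len(out) and stack[-1] == out[j]:
            stack.pop()
            j += 1
    # The permutation is valid iff the whole output was matched.
    return j == len(out)
```

For example, [2, 1, 3] is reachable from [1, 2, 3] with one stack, while [3, 1, 2] is not.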


According to the notes link I shared, "the capacity refers to the total number of data bytes that the cache can store. It does not include the overhead bits required by the tags, valid bits and LRU bits."
However, your problem asks for different capacity requirements for different mapping techniques. Therefore, it seems we should consider the actual bits required (including overhead) to store each data block. So, the formula should include: 1. the data, 2. the tags, 3. the valid bits, 4. the dirty bits, and 5. the LRU bits. Please go through the notes to find out how these are calculated.
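As a sketch of how the overhead adds up, here is the per-line bit count for a hypothetical direct-mapped cache. The parameters (32-bit addresses, 64-byte blocks, 1024 lines) are assumed for illustration only and are not from the original problem:

```python
# Hypothetical per-line storage cost of a direct-mapped cache.
# All parameters below are assumed for illustration.
ADDR_BITS = 32     # physical address width
BLOCK_BYTES = 64   # block (line) size in bytes
LINES = 1024       # number of cache lines

offset_bits = (BLOCK_BYTES - 1).bit_length()     # 6 bits to pick a byte in a block
index_bits = (LINES - 1).bit_length()            # 10 bits to pick a line
tag_bits = ADDR_BITS - index_bits - offset_bits  # remaining 16 bits of tag

data_bits = BLOCK_BYTES * 8
valid_bits = 1
dirty_bits = 1
# No LRU bits for direct mapping: each block has exactly one candidate line.
bits_per_line = data_bits + tag_bits + valid_bits + dirty_bits
total_bits = LINES * bits_per_line
```

A fully associative cache with the same data capacity would need wider tags (no index bits) plus LRU bits per line, which is where the different storage requirements come from.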



This contains very useful examples on cache memory.
There is a section on 'Storage Requirements', please go through it once and try to solve your problem again. If you find the answer, do share.


@isouravmishra direct mapping follows the principle of hashing: you generate the key and check only at that particular position for a match, so the capacity limit does not come into the picture. On the other hand, in associative mapping, you keep placing new blocks in the cache until the whole capacity is exhausted; so in this case, the capacity limit should be considered.


I am not entirely sure, but it should be something like the following.

Direct mapping:
Misses due to conflicts and compulsory misses should be considered.
Therefore, hit ratio = (1000 - (150 + 100)) / 1000 = 750 / 1000 = 75%

Associative mapping:
Misses due to capacity limitations and compulsory misses should be considered.
Therefore, hit ratio = (1000 - (100 + 100)) / 1000 = 800 / 1000 = 80%
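The two calculations can be expressed as one small helper, using only the figures already given here (the `hit_ratio` function is my own sketch):

```python
# Hit ratio from a total reference count and a list of miss counts.
def hit_ratio(total_refs, miss_counts):
    return (total_refs - sum(miss_counts)) / total_refs

direct = hit_ratio(1000, [150, 100])  # conflict + compulsory misses -> 0.75
assoc = hit_ratio(1000, [100, 100])   # capacity + compulsory misses -> 0.80
```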

@vivek14 can you please help in this?

12 May 2016 - 4:42pm

Go through the first answer of this for an elaborate solution: http://math.stackexchange.com/questions/401937/how-is-nn-2n-4-1-equal-to...

12 May 2016 - 3:36pm

The outer for loop iterates log n times.
For first iteration of the outer for loop, inner for loop iterates n times.
For second iteration of the outer for loop, inner for loop iterates n/2 times.
For third iteration of the outer for loop, inner for loop iterates n/4 times...

Therefore, the total number of iterations of the inner for loop, i.e. the total number of times the statement count += 1 executes, equals
n + n/2 + n/4 + ... + 1 (a total of about log n terms) ≤ 2n - 1 = O(n)
Therefore, the time complexity is O(n).
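The sum can also be verified by simulating the loops directly. The code below is an assumed reconstruction of the loop structure described above (outer loop halving from n to 1, inner loop running that many times):

```python
# Assumed reconstruction of the nested loops being analyzed.
def count_iterations(n):
    count = 0
    i = n
    while i >= 1:          # outer loop: i = n, n/2, n/4, ..., 1
        for _ in range(i): # inner loop: runs i times
            count += 1
        i //= 2
    return count
```

For n = 1024 this gives 1024 + 512 + ... + 1 = 2047 = 2n - 1, and for any n the count stays below 2n, confirming O(n).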

2 Feb 2016 - 3:58pm

I don't think so.


Time complexity is O(n).
