Ranita works as an Assistant Professor in the Computer Science and Engineering Department at IIT Roorkee. She completed her PhD in the Computer Science and Engineering Department at IIT Kharagpur. Before that, she completed her MTech at IIEST Shibpur and received job offers from companies such as Amazon and Samsung Labs. However, she preferred to pursue higher studies and stay in academics and teaching. She believes in the motto of free education and contribution.


Alma Mater:

PhD, IIT Kharagpur (2012 to 2016)
Master of Engineering, Indian Institute of Engineering Science and Technology, Shibpur (2010 to 2012)
Bachelor of Technology, Kalyani Government Engineering College (2005 to 2009)

Experience:

Assistant Professor, Indian Institute of Technology Roorkee
Project Linked Personnel, Indian Statistical Institute, Kolkata (2009 to 2010)

The correct answer is (b) remains the same. To understand this, let us review the informal algorithm for the timestamp-based protocol for concurrency control (copied from Wikipedia; you can also refer to the formal version of the algorithm there):

1) If a transaction wants to read an object,

a) but the transaction started before the object's write timestamp, it means that something changed the object's data after the transaction started. In this case, the transaction is cancelled and must be restarted.
b) and the transaction started after the object's write timestamp, it means that it is safe to read the object. In this case, if the transaction timestamp is after the object's read timestamp, the read timestamp is set to the transaction timestamp.

2) If a transaction wants to write to an object,

a) but the transaction started before the object's read timestamp, it means that something has had a look at the object, and we assume it took a copy of the object's data. So we can't write to the object, as that would make any copied data invalid; the transaction is aborted and must be restarted.
b) and the transaction started before the object's write timestamp, it means that something has changed the object since we started our transaction. In this case, we use the Thomas write rule and simply skip our write operation and continue as normal; the transaction does not have to be aborted or restarted.
c) otherwise, the transaction writes to the object, and the object's write timestamp is set to the transaction's timestamp.
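The rules above can be sketched in Python. This is a minimal sketch: `DataObject`, `read` and `write` are hypothetical names, and an exception stands in for the abort-and-restart:

```python
class Aborted(Exception):
    """Raised when a transaction must be cancelled and restarted."""

class DataObject:
    """A database object carrying read and write timestamps."""
    def __init__(self):
        self.rts = 0      # largest timestamp of any transaction that read it
        self.wts = 0      # timestamp of the last transaction that wrote it
        self.value = None

def read(obj, ts):
    # Rule 1(a): the object was written after this transaction started.
    if ts < obj.wts:
        raise Aborted(f"T{ts}: object already written at {obj.wts}")
    # Rule 1(b): safe to read; advance RTS only if this reader is younger.
    obj.rts = max(obj.rts, ts)
    return obj.value

def write(obj, ts, value):
    # Rule 2(a): a younger transaction already read the object.
    if ts < obj.rts:
        raise Aborted(f"T{ts}: object already read at {obj.rts}")
    # Rule 2(b): Thomas write rule -- silently skip an obsolete write.
    if ts < obj.wts:
        return
    # Rule 2(c): perform the write and advance WTS.
    obj.value = value
    obj.wts = ts
```

For instance, after reads by transactions with timestamps 3, 1 and 4, `rts` becomes 4, and `write(A, 2, ...)` raises `Aborted` by rule 2(a).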

Now consider the following example schedule:

T1      T2      T3      T4

                R(A)
R(A)
                        R(A)
        W(A)

Consider that after the execution of the last R(A), T4 needs to rollback (as given in the question). So, RTS(A) was 4 and now we need to decide what should be the new RTS for A. Note that, the schedule at hand now is something like the following:

T1      T2      T3

                R(A)
R(A)
        W(A)

Independent of T4, this schedule will have RTS(A) = 3 when T2 tries to write and hence T2 should be aborted by rule 2(a).

Now, after the rollback of T4, consider option (a), i.e. changing RTS(A) to 0. This will certainly allow W(A) by T2 to execute without any problem, as it falls under rule 2(c), and hence violates the timestamp ordering by allowing T2 to overwrite data already read by T3.

Consider option (c), i.e. RTS(A) becomes equal to the timestamp of the transaction that read A just before T4; in this example, RTS(A) becomes 1. The same problem arises as with option (a).

Therefore, if we choose option (b) and keep RTS(A) the same as before, i.e. 4, then W(A) by T2 falls under rule 2(a) and T2 will be aborted. So, option (b) is the correct answer.

This is why RTS is kept as the largest of the timestamps of the transactions that have read the data, not merely the most recent timestamp.


This is an example of a fan trap and a chasm trap and of how to resolve these scenarios. The following brief explanation may help you understand the concept, which is often explained confusingly across the web. This explanation is consistent with the concept as explained on Wikipedia (https://en.wikipedia.org/wiki/Entity–relationship_model#Model_usability_issues). You can also refer to https://db.grussell.org/section005.html for a detailed treatment.


You are very close to the answer, but there is one catch: the first row of pixels (at y = 0) and the last row of pixels (at y = 37) are not full length. You may try to modify the approach accordingly to get the correct answer. Very good presentation, by the way.


Consider the attribute set ABCDEG and the FD set
AB → C, AC → B, AD → E, B → D, BC → A, E → G
Is the following decomposition of R(ABCDEG)
(a) dependency-preserving?
(b) lossless-join?
Give proper justification for your answer.



CREATE TABLE Staff (
    StaffNo VARCHAR(5),
    StaffName CHAR(100),
    StaffSalary NUMERIC(5,2),
    DepartmentNo VARCHAR(5),
    SupervisorStaffNo VARCHAR(5),
    PRIMARY KEY (StaffNo),
    CONSTRAINT staff_self_key
        FOREIGN KEY (SupervisorStaffNo)
        REFERENCES Staff(StaffNo)
);

-- table name "Project" inferred from the column names
CREATE TABLE Project (
    ProjectNo VARCHAR(5),
    ProjectDesc CHAR(100),
    PrjMgrStaffNo VARCHAR(5),
    PRIMARY KEY (ProjectNo),
    CONSTRAINT staff_key
        FOREIGN KEY (PrjMgrStaffNo)
        REFERENCES Staff(StaffNo)
);

To list those members of staff who earn more than their supervisors, it seems a simple query like the following using aliasing should work.

SELECT A.StaffName, A.StaffSalary, B.StaffName, B.StaffSalary, B.DepartmentNo
FROM Staff AS A, Staff AS B
WHERE A.SupervisorStaffNo = B.StaffNo AND A.StaffSalary > B.StaffSalary
ORDER BY B.DepartmentNo;


Most of the time, the primary key of each entity set is also underlined in the ER diagram. This tells us about the underlying FDs. In that case, along with the cardinality and participation constraints, you also need to abide by the FDs.


"In mathematics, and more specifically in graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path. In other words, any acyclic connected graph is a tree." --- taken from wikipedia.

"A polytree (or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree." --- also taken from wikipedia.

So, a tree should always be considered an undirected graph.


Ā is connected and we need to find the maximum possible number of edges in A. Therefore, we first need to determine the minimum number of edges needed to make a graph on n vertices connected, which is (n-1).
Now, a complete graph on n vertices contains n(n-1)/2 edges.
So, if Ā is connected with the minimum number of edges, the maximum number of edges in A is n(n-1)/2 - (n-1) = (n-1)(n-2)/2.

So, the correct answer is option (A): (n-1)(n-2)/2.
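The counting argument can be checked with a small sketch (the helper name is mine):

```python
def max_edges_with_connected_complement(n):
    """Maximum edges in A when the complement of A must stay connected.

    The complement needs at least (n - 1) edges (a spanning tree) to be
    connected, so A can take every other edge of the complete graph.
    """
    total = n * (n - 1) // 2      # edges in the complete graph K_n
    spanning_tree = n - 1         # minimum edges for a connected complement
    return total - spanning_tree  # equals (n - 1)(n - 2) / 2

# The subtraction matches the closed form for several small n.
for n in range(2, 10):
    assert max_edges_with_connected_complement(n) == (n - 1) * (n - 2) // 2
```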

4 Aug 2016 - 7:46pm

Yes, there are other planarity checking algorithms as well. You can get the list here: https://en.wikipedia.org/wiki/Planarity_testing


Why go to all that trouble!

We can use the unitary method, since the time complexity O(nlogn) tells us the running time is proportional to the value of nlogn, not equal to it.
So, nlogn for n = 64 is 64*log64 = 64*6 (taking base 2; the base does not matter at all, you can take any base you like and will always get the same answer).
Now, apply the unitary method on nlogn, not on n:
It takes 30 sec for 64*6 = 384.
So, 1 sec corresponds to 384/30.
And 360 sec (i.e. 6 min) corresponds to 384*360/30 = 384*12 = 4608.
Now, we need to find n such that nlogn = 4608.
Going by the options, 256*log256 = 256*8 = 2048,
and 512*log512 = 512*9 = 4608 (exact value, no approximation).
Hence, 512 is the answer, i.e. option (B).
Note that when we use log in a time complexity, the base does not matter; it only needs to be a constant, not dependent on the input size.
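The unitary-method calculation can be sketched as follows; `runtime_seconds` is a hypothetical helper that just scales the observed 30 s linearly in n*log2(n):

```python
from math import log2

def runtime_seconds(n, base_n=64, base_time=30.0):
    """Runtime proportional to n*log2(n), calibrated on the known case
    (base_n elements take base_time seconds)."""
    return base_time * (n * log2(n)) / (base_n * log2(base_n))

# 64 elements take 30 s; which option fits exactly in 6 minutes (360 s)?
print(runtime_seconds(256))  # 160.0 -- well under 6 minutes
print(runtime_seconds(512))  # 360.0 -- exactly 6 minutes, so n = 512
```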


The chromatic number of a cycle graph with an odd number of vertices is always 3, and with an even number of vertices it is always 2.
So, the chromatic number of a cycle graph on 149 vertices is 3.
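A quick way to check the parity rule: alternate two colors around the cycle and see whether the closing edge clashes (the helper names are mine):

```python
def two_colorable_cycle(n):
    """Try to 2-color the cycle C_n by alternating colors around it."""
    colors = [i % 2 for i in range(n)]
    # Every consecutive pair already differs; only the closing
    # edge (vertex n-1 back to vertex 0) can clash.
    return colors[n - 1] != colors[0]

def cycle_chromatic_number(n):
    """Chromatic number of C_n for n >= 3: 2 if even, else 3."""
    return 2 if two_colorable_cycle(n) else 3

print(cycle_chromatic_number(149))  # 3
print(cycle_chromatic_number(150))  # 2
```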

11 Jun 2016 - 2:29pm

Thanks Arvind. Now corrected.

11 Jun 2016 - 2:29pm

If you follow the execution for a given queue with elements in increasing order from rear to front, you will see the following pattern:

Now, note that each element (except the last, i.e. the smallest one) is pushed onto the stack once and, in the next iteration, popped from the stack and enqueued at the rear of the queue. This is the worst case of execution. So, we do 2n - 1 = 2*16 - 1 = 31 iterations to finally stabilize the smallest element in the stack. After this 31st iteration, we are back to the same situation as the initial input, except that the number of elements in the queue has decreased to 15. So, the total maximum number of iterations is:
(2n - 1) + (2(n-1) - 1) + (2(n-2) - 1) + ... + (2(n-(n-1)) - 1)
= 2(n + (n-1) + (n-2) + ... + 1) - n
= 2 * n(n+1)/2 - n
= n(n+1) - n
= n²
= 16² = 256

11 Jun 2016 - 11:10am

Just keep two points in mind. First, a 2D array can be thought of as a 1D array of 1D arrays. Second, when you increment (decrement) a pointer, it moves to the next (previous) element; so if you are working with addresses, you have to take the element size into account. That's why, in the figure below, a + 3 evaluates to 5108 where a = 5000 (a is an array of 1D arrays, and each element of a is 9*4 = 36 bytes long; I have taken the original 2D array a[10][9] to be an integer array with each integer occupying 4 bytes of memory). The whole calculation assumes row-major order of memory allocation.
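The address arithmetic can be mimicked with a short sketch; the base address 5000, the int a[10][9] layout, and 4-byte integers are taken from the answer, while the helper names are mine:

```python
BASE = 5000   # assumed address of a, as in the answer
COLS = 9      # a is declared as int a[10][9]
INT_SIZE = 4  # bytes per int, as assumed in the answer

def row_address(i):
    """Address of a + i: each row spans COLS * INT_SIZE = 36 bytes."""
    return BASE + i * COLS * INT_SIZE

def element_address(i, j):
    """Address of &a[i][j] under row-major allocation."""
    return row_address(i) + j * INT_SIZE

print(row_address(3))  # 5108, matching a + 3 in the explanation
```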
